This past month I worked on updating the lime-app dependencies (they were quite outdated). I also worked on the view and the ubus module that read and save the ground routing settings in the LiMe config file.
It is the minimal configuration of a lime-app plugin: it defines the constants, the store, the actions (set and get) and basic epics to fetch the data using uhttpd-mod-ubus.
Lime-app uses Preact for rendering the views, redux for state management and redux-observable (RxJS) as middleware for asynchronous events. For now the view only fetches the settings as JSON and exposes them to the user.
As I mentioned in the last blog post [0], the first step was to define the app’s functionalities [1] and to design the app architecture.
Basically the app contains one service, which runs in the background and communicates with the UI thread via broadcasts (publish-subscribe pattern). Since by default the service would run in the main thread, which is not wanted, I created a ScanThread to handle the scanning. Every 2 s by default (later this should adapt to the user’s movement speed, etc.) it sends a scan request to the WifiLocator and gets a scan result back from it asynchronously. The WifiValidator then validates the scan result as well as the returned location, and puts the valid wifi access points into a WifiQueue. The WifiStorer takes everything from the WifiQueue and writes it to the local disk (a simple producer-consumer pattern; see the sketch below). Based on the user’s upload mode setting, the WifiReader is triggered if uploading is wanted; it reads the local files in the format needed for uploading and passes them to the WifiUploader, which uploads the data to any supported project API. As soon as the upload is successful, the data is deleted and the ranking is updated.
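The WifiQueue decouples the validator from the WifiStorer. Conceptually it works like this (a Python sketch of the pattern, not the actual Android/Java code; write_to_disk() is a simplistic placeholder for the real storage format):

from queue import Queue
from threading import Thread

wifi_queue = Queue()  # the WifiQueue: the validator produces, the storer consumes

def write_to_disk(access_point):
    with open("wifis.log", "a") as f:  # placeholder storage format
        f.write(repr(access_point) + "\n")

def wifi_storer():
    """Consume validated access points and persist them to local disk."""
    while True:
        access_point = wifi_queue.get()  # blocks until an item is available
        write_to_disk(access_point)
        wifi_queue.task_done()

Thread(target=wifi_storer, daemon=True).start()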
The next step was designing the new UI; I got some feedback from my mentor and changed it accordingly. In the process I also defined all the user setting options. I spent a lot of time reading the Android documentation on parallel processing and made a decision for each functionality, which is relevant for the next part (WifiStorer, WifiReader: plain Thread; WifiUploader: AsyncTaskLoader, etc.). I will write more about it in the next post.
Finally I jumped into implementing. I started with the demo mockup and then slowly implemented the logic. I have finished the scan service and a part of the WifiValidator. The WifiLocator uses GPS to determine the location if available; otherwise it makes a request to openwifi.su with the surrounding wifis. I provided methods for both the new and the old openwifi.su API in case we want to use either of them in the future. I ran into an Android bug where the wifi scan result always contains 0 entries if the user disables GPS, even when the location permission is granted (tested on Android 6). It is kind of weird, because scanning wifi has nothing to do with GPS, and keeping GPS on all the time costs the phone a lot of energy. Still, it is an intended Android behaviour, meant to make users aware that their location information is being accessed when they use this kind of app, because the location of the user’s phone can be determined from the collected wifis. Since this is OS design, I show users a message with this information and ask them to turn GPS on if they have turned it off. I also implemented a part of the WifiValidator: the WifiFilterer, which checks whether an access point is open wifi, from Freifunk, a mobile hotspot, or marked with _nomap (in which case it should not be collected).
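To illustrate the idea (a Python sketch with made-up helper logic, not the actual app code; the mobile-hotspot check is omitted here), the WifiFilterer boils down to a few predicates on a scan result:

def is_open(capabilities):
    # Android's ScanResult.capabilities lists the security modes,
    # e.g. "[WPA2-PSK-CCMP][ESS]"; an open AP advertises none of them.
    return not any(s in capabilities for s in ("WEP", "WPA", "EAP"))

def is_freifunk(ssid):
    return "freifunk" in ssid.lower()

def may_collect(ssid, capabilities):
    """True if the access point may be stored and uploaded."""
    if ssid.endswith("_nomap"):  # the owner opted out of collection
        return False
    return is_open(capabilities) or is_freifunk(ssid)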
What’s ready now?
If you want to check out the app, feel free to download the .apk install file from [2].
If, as usual, you do not want to install an app from an unknown source, I also provide a short demo video.
What’s next?
Next, I will finish the WifiValidator, which should not only filter the access points but also validate the location, so that the scan service can use a better scan period to save energy (for example, if the location has not changed for a long time, the scan service should be stopped). After that I will continue with the other parts shown in the architecture image above.
After a month of work on the project, the Turnantenna’s driver is evolving towards its definitive version.
During this month I worked hard to write a good driver for the stepper motors. If you’re looking for more details and want to understand the basic functioning of the Turnantenna system, have a look at my first blog post. My work can be found on GitHub.
Overview
Image found at http://abhieeeprojects.blogspot.com
From Wikipedia:
a unipolar stepper motor has one winding with center tap per phase. Each section of windings is switched on for each direction of magnetic field.
To control the position of the rotor, we have to play with the stator’s windings, following a proper pattern in order to make the rotor move smoothly.
The driver does exactly this: it turns the pins of the logic board (the Orange Pi) ON and OFF following the correct pattern; in this way the coils of the engine are powered properly, and the rotor turns.
The older version of the driver was a good prototype, and I based my work on it. As it was a good starting point, I didn’t change the overall scheme, but together with my mentor I found some problems and worked to solve them. The most important issues were the following.
Problem #1: Poor control of the inputs
We used Python 3 to write the code. The driver is a class, called Stepper, with some methods; by calling those methods, the object controls the real engine. To do so we needed the Python wiringpi library.
The old code was something like this:
import wiringpi as w

class Stepper:
    def __init__(self, pin1, pin2, pin3, pin4):
        """Set up the GPIO pins and all the attributes"""
        w.some_method_to_initialize_the_board()
        for pin in (pin1, pin2, pin3, pin4):
            w.some_method_to_configure_the_pin(pin)
        self.initial_attributes = XYZ
        ...

    def stop(self):
        """Make the engine stop"""
        w.some_method_to_turn_off_the_pin(self.pin1)
        w.some_method_to_turn_off_the_pin(self.pin2)
        ...

    def move(self, speed, rel=1, dir=1):
        """Make the engine run somehow"""
        ...

engine = Stepper(0, 1, 5, 12)  # random pin numbers
engine.move(250, 100)          # speed=250, rel=100
To make sure that everything works, the inputs should be checked properly:
The choice of the pins should be checked: different coils can’t use the same pin.
Values should be integers, not chars, lists, tuples or other types.
At the moment I don’t know what happens if I run the engine with a negative speed. Will it go backwards?
This kind of control was lacking in the older version. This is our first problem.
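In the new version the inputs are validated up front. A minimal sketch of those checks (illustrative, not the final code):

def validate_pins(*pins):
    """Each coil needs its own pin, and pins must be integers."""
    if len(set(pins)) != len(pins):
        raise ValueError("different coils can't share the same pin")
    for pin in pins:
        if not isinstance(pin, int):
            raise TypeError("pin numbers must be integers")

def validate_speed(speed):
    """Speed must be a number; its sign will encode the direction."""
    if not isinstance(speed, (int, float)):
        raise TypeError("speed must be a number")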
Problem #2: Too many input variables
Look at the “move” method: could it be simplified? The answer is yes.
The dir (direction) parameter is not that useful. It was meant to switch between “go forward” and “go backward”. But if I want to go in the opposite direction, I can simply ask for a movement with a negative speed. In the newer version I adopted this scheme, allowing negative values for the speed and removing the dir parameter.
rel (relative) can be removed too. Its purpose is to keep the last position of the engine in memory, but this is better done with an object attribute.
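The resulting interface is much leaner. A sketch (the attribute names are illustrative):

class Stepper:
    def __init__(self, pin1, pin2, pin3, pin4):
        self.position = 0  # replaces the old rel parameter
        ...

    def move(self, speed):
        """Run the engine; a negative speed means 'go backwards'."""
        direction = 1 if speed >= 0 else -1
        ...

engine = Stepper(0, 1, 5, 12)
engine.move(250)   # go forward
engine.move(-250)  # go backward, no dir parameter needed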
Problem #3: Wrong speed management
Thanks to Leonardo (my mentor), who discovered this problem, I plotted the following graph, which demonstrates an error in the algorithm used to accelerate the engine. To understand the problem, you need to know that the driver was meant to accelerate the engine at a constant rate. In other words, the acceleration has to be constant [1], and the speed has to increase linearly [2].
Note: in the real code I added an acceleration factor which allows managing the magnitude of this acceleration. That doesn’t affect the constant-acceleration hypothesis.
The simplified version of the algorithm is something like this:
num_step stands for the total number of steps to be done in the acceleration phase;
final_speed is the goal, the speed wanted by the user;
speed is the current speed value. Here we are in the acceleration phase, which is why we start from speed = 0.
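In Python, the loop looks roughly like this (a sketch based on the description above; the values and do_one_step() are placeholders for the real configuration and the pin-switching pattern):

from time import sleep

final_speed = 250.0  # steps/s, the speed wanted by the user
num_step = 100       # steps in the acceleration phase

def do_one_step():
    ...              # placeholder: switch the pins following the pattern

speed = 0
increment = final_speed / num_step  # equal speed increment per step

for step in range(num_step):
    speed += increment  # speed grows linearly with the step count
    t = 1 / speed       # the delay before the next step
    do_one_step()
    sleep(t)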
It’s clear that the algorithm increases the speed constantly (and linearly) with the step count. So condition [2] seems to be satisfied.
However, speed is not used directly: to manage the step frequency, the time delay between steps is used instead, which simplifies the algorithm. But there is a problem, and the following graph demonstrates that hypothesis [1] is not verified:
The graph shows a first interval where the speed increases (acceleration), a second one where the speed is constant, and a last one where the engine slows down (deceleration). We can say that deceleration is a negative acceleration; in fact the two outer intervals mirror each other.
In the graph we can note the hyperbolic shape of the acceleration phases. That is proof of a non-constant acceleration. The problem is that we expected a different graph: we wanted to see a linear shape for the speed function. The driver works, but not as we would like.
The bug is in the time management, since the delay doesn’t decrease linearly. The delay is defined as t = 1/speed, and that is the equation of an equilateral hyperbola. Right now I’m working on a solution to obtain a linear, simple equation.
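One way to obtain a truly constant acceleration would be to derive each delay from kinematics instead of incrementing the speed at every step: with a constant acceleration a, the speed after n steps is √(2·a·n). A sketch of that idea (not necessarily the solution I will adopt; a is an example value, and num_step and do_one_step() are reused from the sketch above):

from math import sqrt
from time import sleep

a = 100.0  # acceleration in steps/s^2, example value

for n in range(1, num_step + 1):
    speed = sqrt(2 * a * n)  # kinematics: v^2 = 2*a*s, with s = n steps done
    do_one_step()
    sleep(1 / speed)         # delays shrink so that speed grows linearly in time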
Solving problems: Make it clear
As this code will be published, I added a lot of documentation: comments, docstrings, and more verbose and specific error handling. I rebuilt the entire move method to make it clearer and more readable, and to solve the problems listed above. I wrote it together with its tests, and it’s still a work in progress simply because it has to be bulletproof.
Tests
I can say that testing was the core activity of this month. I wrote tests, ran tests, tested the tests, deleted some tests, wrote new tests, corrected other tests, added new tests, and I’m still testing.
All the improvements made to the original code were born from tests. I’m a newbie with Python, and testing opened my mind to a ton of things about coding. Now I’m still testing and digging deeper into the code and into how to improve it.
Conclusion
Now this first period is almost over, and it’s time to finish up the last things. After that I’ll start developing the web interface to control the driver remotely.
I make no secret of the fact that this project is a real challenge for me. This is my first serious coding experience. Almost everything is new, but the Ninux community is really supporting me. A special thanks goes to Leonardo and Edoardo for their patience.
insight into how signalling works through an external server
The given examples helped me to build up knowledge as well as a codebase which I will use to implement video and audio transmission over WebRTC.
Evaluating, hacking on and testing those examples helped me to gain an understanding of the inner workings of WebRTC and will surely support me in integrating WebRTC into Meshenger.
Here are some screenshots of the current state of the application:
Contact list | Scannable QR code | QR scanner | Manual information exchange
My next step will be to adapt said projects to the newest WebRTC version, as well as to dig further into the fundamentals of WebRTC and to find a way to circumvent a central server.
During the last few weeks I jumped into LibreNet6 and started setting up a local testbed. With a couple of routers and a virtual machine with a real IPv6 subnet I followed the current Setup (Spanish) guide and eventually got it running. The process had various stumbling blocks and is rather unpleasant to set up. In a future iteration I’ll try to make the setup as simple as possible, involving only the installation of a single package.
The need for LibreNet6
Simply said, LibreNet6 enables the use of the IPv6 functionality of LibreMesh (LiMe). With a single configuration file at /etc/config/lime it’s possible to set nearly all functionality of the LiMe framework, from access points, mesh connections and used addresses to the activated routing protocols. In the default configuration every node has a /64 IPv6 subnet, pseudo-randomly generated from a hash of the configured network name, which all nodes of a (layer-2) mesh cloud therefore share. The subnet is part of Altermundi’s address space, in theory enabling public IPv6 addresses for all nodes and clients of the LiMe cloud. However, most mesh gateways don’t have a direct connection to Altermundi.
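To illustrate the mechanism (a Python sketch, not the actual LiMe code; the prefix is a documentation placeholder, not Altermundi’s real address space):

import hashlib

def mesh_subnet(network_name, prefix="2001:db8:13"):
    """Pseudo-randomly pick a /64 inside a /48, based on the network name."""
    digest = hashlib.sha256(network_name.encode()).hexdigest()
    subnet_id = digest[:4]  # 16 bits of the hash select one of 65536 /64s
    return "%s:%s::/64" % (prefix, subnet_id)

# Every node hashing the same network name derives the same subnet:
print(mesh_subnet("my-mesh-network"))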
This is where LibreNet6 comes in: via a Tinc mesh it connects multiple community networks which only have Internet access through a NATed IPv4 address. Only the cloud gateways (CGs) have to run babeld; within the mesh network other routing protocols can be used. All a CG has to do is announce the public IPv6 uplink to the rest of its cloud. Once multiple mesh networks are linked together, their clients can connect to each other directly via IPv6. A feature of Tinc is NAT traversal, so two CGs may connect directly with one another and avoid routing all traffic over the IPv6 server.
One of the advantages of LibreNet6 is that it can handle multiple IPv6 servers and CGs at the same time. Babeld chooses the fastest connection within the Tinc mesh, and in the mesh clouds the mesh routing protocol in use decides which CG to take.
Speeding up development
I’m not completely new to the LiMe code and have contributed at various ends over the last years (motivated by last year’s GSoC). Developing and testing new software was always tedious, as all packages had to be built individually per target architecture. To speed this up I spent some time setting up automatic snapshot builds for LiMe, which keep the LiMe snapshot repository up to date. As nearly all LiMe code is Lua, it’s unnecessary to compile packages for all targets: to have a single package running on all architectures, the PKGARCH:=all setting can be used in a package’s Makefile, and that’s what I did. As a result, LiMe now has CI and a constantly updated snapshot repository, which will allow me (and the other LiMe devs) to accelerate the development and testing of new functionality and packages!
Evaluation of the current LibreNet6 state
So far the setup was roughly like this:
Tinc 1.0 was used, with a GitHub repository to share the public keys, which were then deployed on the servers.
Babeld was installed manually on the nodes, requiring the execution of various bash scripts.
With the previously mentioned testbed I tried some new software and came up with an easier setup which stays compatible with already deployed connections:
Use Tinc 1.1 with its new invite and join features, which allow clients to connect simply by running Tinc with a given invitation URL. This also handles key creation and exchange, and sets up all Tinc-related configuration files via an invitation-created script.
Offer a lime-app view to execute Tinc’s join command via the web interface and show the state of the connection, e.g. with a simple IPv6 ping check.
Create a simple admin interface to show connected cloud gateways and used IPv6 subnets.
Next steps
So far I spent most of the time on understanding LibreNet6, babeld, Tinc and CI, and on setting up a running testbed. Next week I’ll create a LiMe package to be installed on CGs, which sets up babeld and Tinc. I’ll also dig into the lime-app to understand the web framework and offer a simple interface for users. Lastly, I’ll write a guide for server owners on how to set up the IPv6 server on a Debian system, using real IPv6 or 6to4 tunnels in case only a public IPv4 address is available.
In this blog post I’d like to present the recent changes made in Eewids, why they were done and what’s to come next. For an introduction to Eewids see here.
In general, the steps taken over the last weeks aimed mainly at ease of use and at testing the main concept – having an easily extendable framework at hand. Thus, a rogue-AP detection was added and visualization based on InfluxData tools and Grafana was included. Both steps were much easier to achieve because of the architecture of Eewids.
Starting Eewids most easily
For everyone potentially interested in using Eewids it would have been a big hassle to compile Kismet (the git development version) themselves. As Eewids is completely based on Docker containers, most of the components don’t need to be installed at all. And that’s quite important: no one wants to compile, start and administrate all this stuff (Kismet, Eewids’ parser, RabbitMQ, InfluxDB, Telegraf, Grafana and finally the plugins added to Eewids, like the rogue-AP detection, see below). While all these components are provided as Docker containers and can be started by simply running ‘docker-compose up’, the Wi-Fi card had to be accessed directly so far. Therefore, it was necessary to have a recent version of Kismet’s remote capture, which is not included in any major Linux distribution yet.
Luckily Kismet’s developer found a solution to this problem and documented it. We adapted it to the needs of Eewids and now have a setup in which one can easily start Eewids on a local machine, needing nothing more than a compatible Wi-Fi card, docker and docker-compose. Please see the getting-started.md of Eewids for more information and try it yourself! 😉
Renaming fields of captured data
To make the data captured by Eewids as accessible as possible for developers, many field names saved in the message broker RabbitMQ were changed to closely match Wireshark’s “Display Filter Reference”. See here.
Hearing Map for RogueAP detection
The simple rogue-AP detection which existed before has been extended with a hearing map. The whitelist now contains not only valid ESSID:BSSID pairs, but also the information about which remote capture is able to see which AP. Thus, an attacker cannot use a valid ESSID:BSSID pair of an AP located in a different building to cover an evil-twin attack. See here for more information.
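Conceptually (a Python sketch with made-up data, not the actual Eewids code), the hearing-map check looks like this:

# Whitelist: which remote captures are expected to hear which AP.
hearing_map = {
    ("office-wifi", "aa:bb:cc:dd:ee:01"): {"capture-floor1", "capture-floor2"},
    ("office-wifi", "aa:bb:cc:dd:ee:02"): {"capture-floor3"},
}

def is_rogue(essid, bssid, seen_by):
    """Flag an AP whose pair is unknown or heard by an unexpected capture."""
    allowed = hearing_map.get((essid, bssid))
    if allowed is None:
        return True                # ESSID:BSSID pair not whitelisted at all
    return seen_by not in allowed  # valid pair, but heard in the wrong place

print(is_rogue("office-wifi", "aa:bb:cc:dd:ee:02", "capture-floor1"))  # True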
Add a visualization tool: Grafana
We are developing Eewids to make it easy to add new functions to it. To test this claim, and to actually extend the functionality with a way to analyze and visualize what’s happening around, we added Grafana. It connects easily to different data sources (like InfluxDB, Elasticsearch etc.) and lets you create graphs, lists and so on. As a starting point we added InfluxDB to save our captured data, Telegraf to get the data out of RabbitMQ and send it to InfluxDB, and Grafana to use the data from InfluxDB.
What would have been a hassle to implement on a local machine was quite easy with Docker and the already existing dataset provided by Eewids in RabbitMQ. Thus, it only took us some hours to find out how to use this software, and even that time was not related to Eewids itself, but to our missing basic understanding of Telegraf, InfluxDB and Grafana. That is to say: anyone who already knows these tools and would have liked to add them to Eewids could have done so easily. And this is the objective of Eewids.
We consider this a successful proof of concept. We used InfluxDB for Grafana because we expect new features to come which depend on or use InfluxDB. Likewise, we can imagine a fast and straightforward implementation of Elasticsearch and its related tools and software. We’d be glad to see this adopted in the future as well. 🙂
What comes next?
Now that we have added a visualization tool (Grafana), it would make sense to extend it with more information, to visualize alerts, and so on. Furthermore, we’d like to improve the “backend” features for developers. That means we would like to create some templates to make it easy to start using Eewids data and to add detection methods. Let’s see how it works out!
Hi,
I’m Nick. I study Computer Engineering at TU Berlin. It is my first time participating in Google Summer of Code, and I am building a decentralized WiFi controller.
DAWN is the first decentralized WiFi controller for OpenWrt. The controller provides access to valuable information, e.g., all connected stations, their capabilities, and information about all participating nodes. Moreover, DAWN provides load balancing to increase the network performance by controlling the clients’ association.
What’s missing?
An important aspect of the controller is simple installation. Everybody, even people with limited technical knowledge, should be able to use this controller to increase their network performance at home. Until now, DAWN has required a specially patched OpenWrt to run, so a user needs to compile their own image. The first thing I have to do is bring the latest patches upstream. Some of the patches were rejected, which is why I have to rewrite different functionality and create new pull requests. Furthermore, I have to extend the libiwinfo library to get all the necessary information from the OpenWrt system.
After this, the configuration of the nodes should be simplified. So far, the user has to configure all participating nodes individually. I want to implement some bootstrapping to configure the participating routers automatically.
After simplifying the installation and configuration, I want to visualize the information of the participating nodes with a graphical user interface.
The last step is to improve the controller functionality by adding mechanisms like channel interference detection and other useful features. Moreover, this step includes improving the load balancing.
In my next blog post, I will write about why some of my OpenWrt patches were rejected and how I have to extend libiwinfo. If these steps are successful, everybody will be able to simply install DAWN without the need to patch OpenWrt.
Hi – my name is Katharina, I’m 24 years old and a computer science student at the HU Berlin. I did GSoC 2016 for qaul.net and have been an active contributor for the past 2 years. This year I want to tackle something we’ve been postponing for a while…
qaul.net is a communication app based on Freifunk technologies (namely OLSR) which enables people with no access to the internet to communicate with each other easily, using the devices they already own (phones, tablets, laptops, etc.) and without needing to be experts in networking technologies.
Unfortunately qaul.net has an issue, namely OLSR. The routing backend is based on manipulating kernel tables and as such needs to run as root, which isn’t possible on many devices (such as phones and tablets). Furthermore, it makes heavy use of the WiFi ad-hoc mode, which is no longer present in modern phones and tablets, giving users yet another reason to have to root their devices.
We want to change this to make installation easier (a simple .apk file could be dropped onto a normal phone) and to support newer devices without having to install any kernel-level modules. So that’s the why. What about the how?
Well, that one is a little tricky, to be honest. Large parts of the qaul.net core library are based on the fact that routing is handled externally, and some parts of the code are even OLSR-specific. There are three main parts to this challenge:
Actually designing a resilient, delay-tolerant routing protocol
Building a networking abstraction layer which can handle multiple backends (ad-hoc, WiFi Direct, Ethernet, etc.)
Integrating the new routing core into the rest of the library.
When it comes to designing the protocol, we want to let BATMAN inspire us greatly. A qaul.net node doesn’t need to know the entire topology of the network, only roughly where to send packages for them to be delivered. This also means we can build a delay-tolerant system. This module (which I’ve called the routing core for now) will be written in Rust, a low-level programming language which integrates easily into the rest of the C source code without being as low-level and feature-sparse as C itself. Developing it as a separate module with an API also means that we can take it out of the context of qaul.net and experiment with it in different settings.
Building a networking abstraction API will require knowledge of the different backends. For this we will experiment with WiFi Direct on Android (and maybe iOS, if we find hardware) to see how it behaves, how we can build meshes with it, etc. There are a lot of open questions here which will need to be answered, but hopefully at the end of GSoC 2018 we’ll have some answers and code to go along with them.
I’m excited to get working on this. It has been about two years in planning and with it we can make qaul.net more accessible to more people.
Hi, my name is Daniel, I am a 19-year-old student from Augsburg, Germany, and I have been working with Android and networking for several years now.
Thus, I am very excited to participate in GSoC for the first time, hoping to learn new ways of connecting people with “Meshenger”.
We realized that there are no solutions that provide audio/video communication without using any centralized server while still working in networks such as the Freifunk community networks.
In this context I will try to fill that void, and also to demonstrate possible uses of the network created by the Freifunk community, in contrast to simply using it as a hotspot for Internet access.
Another issue of Freifunk community networks is that they are often perceived as nothing more than a way to access the Internet. Thus, we want to demonstrate a possible use of community-built networks, which do not necessarily have access to the Internet, by creating a way to phone each other over a local network.
The goal is to make an Android app which allows audio as well as video calls through a local network, without requiring a server.
As seen above, there are several servers that are redundant for our project and which I will need to circumvent.
The signaling, i.e. the exchange of the necessary network-related information, will then happen through the scan of a QR code generated by one of the connecting devices.
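Such a QR code might carry a payload like the following (purely hypothetical field names, just to illustrate the kind of information the signaling has to move):

import json

# What one device could encode into its QR code (hypothetical schema)
offer = {
    "name": "alice",                        # display name of the callee
    "address": "fe80::1a2b:3c4d",           # address inside the local network
    "port": 10001,                          # port the app listens on
    "sdp": "<WebRTC session description>",  # WebRTC connection parameters
}
qr_payload = json.dumps(offer)  # the string to be rendered as a QR code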
WebRTC is an open standard used by many mobile and browser applications, such as “WhatsApp”, to establish a connection in order to make video/audio calls.
It is already implemented in all major browsers as well as systems like Android, thus providing a common base to build our app and, as a potential future goal, to expand it to other platforms.
Of course there are several challenges I will have to tackle, like getting WebRTC running without a STUN server, which is normally required.
In the next post I will share my progress as well as my collected experiences, explaining my approaches and the trade-offs that will eventually have to be made.
I am Tobias, a Computer Science student at TU Berlin. This is my second time participating in GSoC for Freifunk.
I am excited about this project as it helps to reduce the entry barrier for inexperienced users of OpenWrt and its web interface LuCI.
When you look at the current LuCI web interface you will notice that it looks fairly decent, especially with the Material theme.
However, for an inexperienced user without a technical background it surely looks scary. All the text, full of technical terms and with few pictures, can look like a book with seven seals.
This project aims to introduce a graphical configuration mode.
To make the configuration interface more connected to the actual router the user owns, we want to display an image of the router’s backside with its ports in the web interface.
The user shall be able to interact with this graphical representation of the router by hovering and clicking on the different parts like LAN ports, antenna etc.
What are the necessary steps to achieve this goal?
First, we need pictures of the backside of all the different router models. The idea here is to collect them via crowdsourcing by the community: everyone can take a picture of their router and upload it to a Git repository. Also, the location of the router components must be marked on every picture. For that I will develop a small application which allows the user to annotate a router picture and generates a metadata file.
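Such a metadata file could look roughly like this (a sketch only; the actual format is still to be defined):

import json

# Hypothetical annotation format: each component is marked by a
# rectangle (x, y, width, height) in pixel coordinates on the photo.
metadata = {
    "model": "TP-Link TL-WR841N v11",
    "image": "tl-wr841n-v11-back.jpg",
    "components": [
        {"type": "lan",     "label": "LAN1", "rect": [310, 120, 60, 40]},
        {"type": "wan",     "label": "WAN",  "rect": [240, 120, 60, 40]},
        {"type": "antenna", "label": "ANT1", "rect": [30, 10, 40, 90]},
    ],
}
print(json.dumps(metadata, indent=2))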
Second, the annotated pictures need to be integrated into the OpenWrt buildsystem.
Third, a LuCI application needs to be developed to display the result as an interactive graphic in the web interface.
In the next blog post I will go into more detail on the individual steps and update you on the progress.