qaul.net – The Conceptual Value of Testing

While working on the visn eventual consistency testing framework for the qaul.net project, I’ve run into an excellent example of one of the most important reasons to test software, in some ways more important than the discovery of regressions, design defects, or other functional issues: the ability to uncover problems in the conceptual model around which the software is built.

Conceptual Testing

Unit, integration, and acceptance tests are well known for their value in detecting regressions, ensuring that functions, classes, and other units are written in a self-contained and composable style, and ensuring that design goals are met throughout the lifecycle of the project.

In statically typed languages like Rust, however, it can often be tempting to eschew the fine-grained level of unit testing used in dynamic languages, since the compiler checks many of the constraints unit tests are designed to impose. Rust, for example, permits encoding a lot of detail about the presence, or absence, of values with the type system.

In qaul.net’s libqaul, we provide a model for metadata about a user in the UserData struct (from libqaul/src/users/mod.rs):

/// A public representation of user information
///
/// This struct is used for both the local user (identified
/// by `UserAuth`) as well as remote users from the contacts book.
#[derive(Default, Debug, PartialEq, Clone)]
pub struct UserData {
    /// A human readable display-name (like @foobar)
    pub display_name: Option<String>,
    /// A human's preferred call-sign ("Friends call me foo")
    pub real_name: Option<String>,
    /// A key-value list of things the user deems interesting
    /// about themselves. This could be stuff like "gender",
    /// "preferred languages" or whatever.
    pub bio: BTreeMap<String, String>,
    /// The set of services this user runs (should never be empty!)
    pub services: BTreeSet<String>,
    /// A users profile picture (some people like selfies)
    pub avatar: Option<Vec<u8>>,
}

This struct wraps every field in an Option, except for those fields whose types can already be empty (like the BTreeMap or BTreeSet), since by design a user may not have set any of these fields. This works really well for user storage, which was the original purpose of the data structure, but as I found out, it does not work well for transmitting user information.

Conceptual Problems in the User Model

Initially, libqaul was designed to use the UserData struct for all user data needs, including transmission. Some of the UserData-related API surface was the first that I implemented during the initial deployment of visn, and therefore the first API surface to be tested. During that process, I mapped the function Qaul::user_update() to a visn synthetic event, UserUpdate, which carried a UserData to be passed to user_update().

While writing these tests, I encountered a problem: what happens in the case that a user wants to clear a field in their UserData? Do they issue a UserData in which there is an Option::None value in that field (like a null), which is interpreted to mean that the field should be cleared?

That approach made the user_update() function very easy to implement, since it could simply assign the newly received UserData as the new canonical UserData for that user. It leads to a problem, however, when it comes to data transmission over the actual network: when a user has set a profile photo or a lot of biography fields, the UserData can be pretty large, and retransmitting it on every subsequent update is not very practical.

The act of writing these tests, which were primarily designed to prevent regressions, led me to implement a delta-based UserData update scheme, wherein the UserData is updated incrementally with small changes. This provides other advantages, too, such as allowing more orderings of those events’ arrivals to result in a valid state for the UserData.
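
To sketch the idea (using a hypothetical UserDataDelta type; the actual type in libqaul differs), each update event now carries a single small change, which is applied to the stored UserData:

// A hypothetical delta type, not libqaul's exact definition.
pub enum UserDataDelta {
    DisplayName(Option<String>),
    RealName(Option<String>),
    SetBioLine(String, String),
    RemoveBioLine(String),
    AddService(String),
    RemoveService(String),
    Avatar(Option<Vec<u8>>),
}

impl UserDataDelta {
    /// Apply one incremental change to the canonical UserData.
    pub fn apply(self, data: &mut UserData) {
        match self {
            UserDataDelta::DisplayName(v) => data.display_name = v,
            UserDataDelta::RealName(v) => data.real_name = v,
            UserDataDelta::SetBioLine(k, v) => { data.bio.insert(k, v); },
            UserDataDelta::RemoveBioLine(k) => { data.bio.remove(&k); },
            UserDataDelta::AddService(s) => { data.services.insert(s); },
            UserDataDelta::RemoveService(s) => { data.services.remove(&s); },
            UserDataDelta::Avatar(v) => data.avatar = v,
        }
    }
}

Clearing a field becomes an explicit, tiny event (e.g. DisplayName(None)), retransmissions stay small, and many deltas commute, so more arrival orderings resolve to a valid state.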

Conceptual Problems in visn

In addition to uncovering problems in the design of libqaul, this process helped me refine my ideas for the visn testing framework. Initially, visn assumed that all operations modelled by synthetic events were infallible, or at least that failure to perform an action should lead to test failure. In fact, a critical component of eventually consistent systems is their ability to reject invalid states, in order to remain robust in the face of serious network problems or malicious input.

Originally, the resolve function took the state of the system under test and an event, and returned the new state (Fn(Event, System) -> System).

To address this problem, visn’s type model became even more complex, incorporating a separate return type rather than requiring that the function which resolves events always return a successfully transformed state (Fn(Event, System) -> Return); the infallible variant now simply sets Return to System.

In addition, rather than taking a singular System argument when resolving events, visn now takes a function returning a System, laying the groundwork for supporting multiple permutations of the ordering of queued events.
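
In simplified form (hypothetical signatures, not visn’s exact API), the change looks roughly like this:

// Before: resolving an event always yields a new System state.
fn resolve_infallible<Event, System>(
    event: Event,
    system: System,
    resolve: impl Fn(Event, System) -> System,
) -> System {
    resolve(event, system)
}

// After: the resolver returns a caller-chosen Return type (for example
// Result<System, Error>), and the starting state comes from a closure,
// so the same initial state can be rebuilt for each permutation of the
// queued events. Setting Return = System recovers the old behaviour.
fn resolve_fallible<Event, System, Return>(
    event: Event,
    make_system: impl FnOnce() -> System,
    resolve: impl Fn(Event, System) -> Return,
) -> Return {
    resolve(event, make_system())
}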

Conclusion

Testing is important for both compiled and dynamic languages to prevent defects and enforce good factorization, but the benefits to compiled languages can, like many design processes, be moved “left”, into the pre-execution step. As seen here, the simple act of writing tests often leads to conflict with the type system and compiler that can reveal conceptual and design defects in the system being tested.

qaul.net – Strictly Typed Code in a Stringly Typed World

The new qaul.net HTTP API speaks JSON, as do increasingly many things. JSON lets you express complex types, it maps well to most programmers’ mental models, it’s self-describing, and there’s a decent library for it in every language under the sun. In the Rust world, JSON is primarily dealt with using the serde_json crate (https://crates.io/crates/serde_json), which allows the programmer to easily map strictly typed structures into JSON and back. Today we’re going to be talking about the difficulties we encountered building a type for JSON:API’s relationship data field (https://jsonapi.org/format/#document-resource-object-linkage).

The data field of a relationship can be any of the following things:

  • non-existent
  • null
  • a single object
  • []
  • an array of objects

Each of these options semantically represents a distinct thing, so we should be able to tell them apart. We will use the following enum to represent our value:

use serde::Serialize;

#[derive(Serialize)]
#[serde(untagged)]
enum OptionalVec<T> {
  NotPresent,
  One(Option<T>),
  Many(Vec<T>),
}

Non-existent relationships will be represented by OptionalVec::NotPresent, to-one relationships (empty or otherwise) by OptionalVec::One, and to-many relationships by OptionalVec::Many.

Serializing

To allow our enum to serialize properly we just need to implement a method to tell if it’s supposed to be present or not:

impl<T> OptionalVec<T> {
  pub fn is_not_present(&self) -> bool {
    match self {
      OptionalVec::NotPresent => true,
      _ => false,
    }
  }
}

Now when we wish to use our enum we simply need to put the #[serde(skip_serializing_if = "OptionalVec::is_not_present")] attribute before the field.

Deserializing

To cover the OptionalVec::NotPresent case we will need to implement std::default::Default for OptionalVec as follows:

impl<T> Default for OptionalVec<T> {
  fn default() -> Self {
    OptionalVec::NotPresent
  }
}

Now whenever we use OptionalVec we need to add the #[serde(default)] attribute to the field. This tells serde to fill the field with a default value if the key isn’t present.
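
Putting the two attributes together, a relationship field might look like this (a minimal sketch with a hypothetical Friend type, relying on the custom deserializer shown below):

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Friend {
  id: String,
}

#[derive(Serialize, Deserialize)]
struct Relationship {
  // Skipped during serialization when NotPresent, and defaulted to
  // NotPresent during deserialization when the key is missing.
  #[serde(default, skip_serializing_if = "OptionalVec::is_not_present")]
  data: OptionalVec<Friend>,
}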

For the other options, we need to implement a custom deserializer. The technically proper way to do this is to build a Visitor, but we’re going to take the simpler route and deserialize it to serde_json::Value first. Our deserializer is as follows:

use serde::de::{Deserialize, DeserializeOwned, Deserializer, Error};
use serde_json::Value;

impl<'de, T> Deserialize<'de> for OptionalVec<T>
where T: DeserializeOwned {
  fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
  where D: Deserializer<'de> {
    let v = Value::deserialize(deserializer)?;
    match serde_json::from_value::<Option<T>>(v.clone()) {
      Ok(one) => Ok(OptionalVec::One(one)),
      Err(_) => match serde_json::from_value(v) {
        Ok(many) => Ok(OptionalVec::Many(many)),
        Err(_) => Err(D::Error::custom("Neither one nor many")),
      },
    }
  }
}

And that’s it! Effectively we try first to deserialize the singular case and if that fails we try to deserialize the multiple case. The first case will catch null as Option<T> will deserialize null as None.
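
As a quick sanity check of the remaining shapes (a sketch, assuming the pieces above are in scope):

let one: OptionalVec<u32> = serde_json::from_str("3").unwrap();
assert!(matches!(one, OptionalVec::One(Some(3))));

// null hits the first case and becomes One(None)...
let null: OptionalVec<u32> = serde_json::from_str("null").unwrap();
assert!(matches!(null, OptionalVec::One(None)));

// ...while arrays, empty or not, become Many.
let many: OptionalVec<u32> = serde_json::from_str("[]").unwrap();
assert!(matches!(many, OptionalVec::Many(ref v) if v.is_empty()));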

Conclusion

This is just one of the many challenges we faced writing a framework for JSON:API parsing in Rust. The contents of this article in their proper context can be found here: https://github.com/qaul/json-api/blob/master/src/optional_vec.rs

Load-correlated distributed bandwidth analysis for LibreMesh networks – #3: Completed test network and broadened scope of the work

The planned test network has been built, employing both fully supported routers (I just documented them in the tested routers list here) and common home routers (officially unsupported by LibreMesh but supported by OpenWrt).

Employing unsupported routers required expanding my previous work on enabling an AP-sta (point-to-multipoint: access point to clients) network architecture instead of the default IEEE 802.11s mesh. My previous solution relied on BMX6, which will not be included in the next release in favor of Babeld, so the problem is open again. I provisionally managed to run Babeld on the AP and client interfaces by adding the following settings in /etc/config/lime on the access point:

config wifi 'radio0'
     list modes 'apname'
     option country 'ES'
     option channel_2ghz '11'
     option apname_ssid 'LibreMesh.org/%H'
     option apname_key 'someAPpassword'
     option apname_encryption 'psk2'
     option distance '100'

config net 'wirelessap'
     option linux_name 'wlan0-apname'
     list protocols 'babeld:17'

and the following in the /etc/config/lime of the client (taking advantage of the client protocol I added some time ago here):

config wifi 'radio0'
     list modes 'client'
     option country 'ES'
     option channel_2ghz '11'
     option client_ssid 'LibreMesh.org/LiMe-eb7f64'
     option client_key 'someAPpassword'
     option client_encryption 'psk2'
     option distance '100'

config net 'wirelessclient'
     option linux_name 'wlan0-sta'
     list protocols 'client'
     list protocols 'babeld:17'

For some reason this solution does not propagate the default route obtained from Babeld to the whole network. This does not directly affect my project, and I’ll surely manage to fix it in the upcoming days.
In case the usage of such perfectly-working trashware turns out to be a blocker, I will receive a few more supported routers in the following days and will just use those.

Also due to the switch to Babeld, obtaining a complete graph of the network is not yet possible: Babeld, being based on the distance-vector principle, does not know the whole topology, so we will have to aggregate it using the new shared-state LibreMesh feature.

During the building of the test network, the planned topology changed a bit, resulting in this one (solid lines are cabled connections, directional dotted lines with arrows point from the client to the access point, non-directional dotted lines are proper IEEE 802.11s mesh):

All the routers were flashed with an OpenWrt 18.06-SNAPSHOT image, which is OpenWrt 18.06.4 plus the additional fixes that appeared in the release branch here, compiled locally using the OpenWrt buildroot. The LibreMesh packages were compiled in the same process but not included in the image; they were installed later using opkg, serving the packages over the local network. This approach proved more convenient than expected; additionally, the fallback image is plain OpenWrt, which decreases the risk of “bricking” the devices.

The complete list of the LibreMesh packages installed is:

check-date-http first-boot-wizard hotplug-initd-services lime-app lime-debug lime-hwd-ground-routing lime-hwd-openwrt-wan lime-proto-anygw lime-proto-babeld lime-proto-batadv lime-proto-wan lime-system shared-state shared-state-babeld_hosts shared-state-dnsmasq_hosts shared-state-bat_hosts shared-state-persist shared-state-dnsmasq_leases shared-state-pirania

Lately, I also got involved in the development of lime-log-review, which uses liblognorm to reduce the volume of the logs and can be used in my project for storing the key information from the voluminous logs when an incident is detected.

BMX7: Wireguard Tunneling – 2nd Update

The second phase is officially over, and within its span a lot has happened. First things first: this year’s Battlemesh took place in Paris and it was a blast, the wg_tun plugin saw some great changes, and the documentation PR is closed.

For my personal experience on WBMv12 refer to this blog post.

Wireguard on BMX7: wg_tun plugin

Figure: the successful announcement and reception of keys and IPv6 crypto-addresses.

Initial work on the plugin was put in during the first phase, and subsequent effort was put in before Battlemesh. At Battlemesh, together with Axel, we established a session between two wg-bmx7 nodes.

This work can be found here: https://github.com/bmx-routing/bmx7/tree/WBMv12_session_with_harry

These nodes are able to exchange the public keys of their own interfaces (devices) and assign each other as a peer.

The key decisions of the current implementation are as follows:

  • BMX7 traditionally uses “fd70” as the prefix of an instance’s auto-assigned primary IP(v6). The primary IP is formed from this prefix followed by the first 14 bytes of the public SHA224 key of the interface’s unique identification mechanism.
    In the wg approach we adhere to this scheme and configure the primary IP of an interface as “fd77” + the 14 bytes of the SHA224 key; see the sketch after this list.
    (Credits to Axel for the idea.)
  • The implementation is auto-configuring: whenever an address carrying the fd77 prefix is received by other wg-bmx7 interfaces, they try to establish a session with each other.
  • Cryptographic keys (private and public) exist only in the scope of a single session. Every time an instance gets restarted, it spawns new keys.
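
For illustration, the address derivation described in the first item might look like this (a Rust sketch, not BMX7’s actual C implementation):

use std::net::Ipv6Addr;

/// Build a primary IP from the 28-byte SHA224 hash of a node's key:
/// two prefix bytes (0xfd, 0x77) followed by the first 14 hash bytes
/// make up the 16 bytes of an IPv6 address.
fn primary_ip(sha224: &[u8; 28]) -> Ipv6Addr {
    let mut octets = [0u8; 16];
    octets[0] = 0xfd;
    octets[1] = 0x77;
    octets[2..].copy_from_slice(&sha224[..14]);
    Ipv6Addr::from(octets)
}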

After Battlemesh and this happy success, effort is being put into the handling of some minimal command-line arguments, as well as the routing of data inside the established tunnels.

Further work will take advantage of the SEMTOR mechanisms provided by BMX7 in combination with the WG plugin.

For more info: either refer to the wg_tun plugin branch, or have a chat with me @luserx0:matrix.org.

BMX7 Github Documentation

The documentation PR was completed, and it gives the GitHub repo a revamped feel. Details can be found inside the PR.

The overall goal of this effort has been to make BMX7 more approachable for new contributors and users.

The bad

Two goals for this phase had to be pushed back (and weren’t achieved):

  • Work on mlc to port its functionality to mlc-ng,
  • The bmx7 Debian Package.

It’s undoubtedly of utmost importance to have WG secure tunnels working on BMX7, so these goals have been pushed back until secure tunneling is completed.

Final Thoughts and Future Goals

The project itself has proved to be more challenging than expected, and it has thoroughly tested my abilities as a developer. Still, there is a lot of knowledge to be gained, and day by day we get closer to a functioning version of the plugin.

The goals for the final phase should be:

  • The establishment of routing between bmx7-wg instances and the successful completion of the beta.
  • Documentation of the added code and its functionality.
  • Proper command-line controlling facilities for users (experienced and not).
  • Research on the reuse of cryptographic keys (WG keys with BMX7 keys) and proper handling of the wg interfaces in the bmx7 ecosystem (kernel calls, avl trees and argument hierarchy).
  • Further work on contributor and user documentation.
  • Stretch: Work on MLC to mlc-ng

GSoC 2019 – Upgrading the Meshenger App – Update 2

Meshenger App

In my previous blog post, I described implementing authentication at the initial handshake in the app. Since then there has been quite a lot of progress in upgrading the Meshenger app.

Progress Till Now

1.) Refactoring of the codebase

Initially, I started with refactoring the codebase in order to allow different means of connecting to a client, e.g. contacting a client over a server on the Internet, or enabling direct calls in layer-3 networks with the help of multicast groups and pim6sd. I am currently working on enabling calls over the Internet.

Firstly, I removed the “challenge” from the entire codebase. The challenge was used as a security parameter, but now that authentication has been implemented, it became redundant. Secondly, I refactored the Contact (client data) and AppData (user data) classes to hold different connection data such as MAC address, port and hostname. I moved the identifier and address into the “connection_data” ArrayList and stored the data structure, serialized as a string, in the Contacts database, which needed to hold contact data of the form List<ConnectionData>. I added this data to the QR-Presenter Activity’s QR code and parsed it in the QR-Scan Activity. I followed the same procedure for the AppData database. Lastly, I removed “username” and “identifier” from the call JSON.

2.) Implementing client online/offline detection over the Internet

To implement client online/offline detection over the Internet, I needed to hold a persistent TCP/IP connection to a signalling server. So I start a thread at app launch, and that thread opens a persistent TCP/IP socket for each SignalingServer object in connection_data. The sockets are held open for as long as the app is running. I used a signalling server written in node.js and ran it on a laptop. After connecting phone A and phone B to the laptop’s hotspot, I ran the app on phone A, and the server displayed that the user was online. I then checked the client status by running the app on phone B while keeping the app on phone A running: the client was reported online, so the client’s status was detected over the Internet.
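
The pattern is roughly the following (an illustrative Rust sketch with hypothetical names; the app itself is written in Java):

use std::io::{Read, Write};
use std::net::TcpStream;
use std::thread;
use std::time::Duration;

// One background thread per signalling server: announce ourselves, then
// hold the socket open. The server considers the client online for as
// long as the connection lives; if it drops, we retry after a pause.
fn keep_presence(server_addr: &'static str) {
    thread::spawn(move || loop {
        match TcpStream::connect(server_addr) {
            Ok(mut stream) => {
                let _ = stream.write_all(b"{\"action\":\"announce\"}\n");
                let mut buf = [0u8; 64];
                // Ok(0) or Err means the server closed the connection;
                // fall through and reconnect.
                while let Ok(n) = stream.read(&mut buf) {
                    if n == 0 { break; }
                }
            }
            Err(_) => thread::sleep(Duration::from_secs(5)),
        }
    });
}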

Next Steps

The next phase i.e. the Final Phase of GSoC 2019, will be about achieving the call over the Internet, adding other features and some code polishing.

OpenWrt Firmware Wizard Update – 2nd phase completion

After 4 more weeks, there has been progress on the agenda. The agenda for the last phase was to create the web interface through which users could download appropriate images for their devices and also build custom images.

What has been achieved

A refined version of the web application is now ready, and the functionality for creating custom images has been added.

The application has been moved to ReactJS. The finished application looks like this:

Note: There are minor bugs and issues in the app which will be rectified in the later versions.

You can look at the code for the interface here and can use the interface here (currently in beta).

Next Steps

In the next phase, an OpenWrt tool has to be engineered which can be used to upgrade OpenWrt automatically.
An interface for the tool has to be created for LuCI, which will house all the settings and preferences of the user. The tool will periodically check whether there is a new version of OpenWrt; if so, it will download and apply the upgrade package automatically.

The foundation to check if there is a new version was laid during the first phase with the JSON metadata.

Retroshare for Android – Update 2

It’s been a month since the last post, so it’s time to summarize this period. In the previous post I showed the beginnings of my adventure with implementing the designed look of the application. That version looked as I intended, but it lacked interactivity, data management and connection to the main retroshare-service application.

The current version of the application has added animations for transitions between screens or tabs. Example animations (due to limited upload size, video is badly compressed 🙁 ):

Also, screens have been added to support various side activities, such as creating a new identity, or making it easier to create an account if one doesn’t already exist:

The next thing I spent some time on was the correct handling of queries to retroshare-service and of data processing. Thanks to that, the application can already perform basic activities such as logging in or managing our identities.

In addition to the activities presented here, I devoted some time to just building a retroshare-service application, which exceeded my expectations about the ease of this task.

Next steps

In the application there are still a few minor modifications to be done. One of the key changes is to ensure proper storage and generation of the authorization token. Nevertheless, the main task of the upcoming period will be to add new functionalities and minor fixes to Retroshare itself.

Since the application is already visually very close to the final product, I would be happy to hear some criticism and hints about what needs to be changed.

Cheers,
Konrad Dębiec

conTest – Wireless Testing Framework Second Update

During this coding period I added automatic processing of the data collected by the wireless testing framework conTest, as well as graphical representation of the data.

While processing the data I found some issues in the controller software as well as the physical testbed.

Most of the tests since the last blog post were done with the attenuation values you can see in the lists below. I changed timing values between the experiments to validate the behavior.

Attenuator 1 uses the following values for its attenuation settings:

  • 10 s -> 0 dB
  • 10 s -> 60 dB
  • 10 s -> 0 dB
  • 10 s -> 60 dB

Attenuator 2:

  • 10 s -> 60 dB
  • 10 s -> 0 dB
  • 10 s -> 60 dB
  • 10 s -> 0 dB

My initial experiment to test functionality was not the best one to validate the testbed (fig. 2). As both paths were basically set to the same attenuation, I only checked the signal in Wireshark, as seen in figure 2, which mostly matches the black line in the closer look in figure 3. The second figure was created with my evaluation scripts, while the first one was created with Wireshark during the first tests.

Figure 2: Signal strength as shown by Wireshark during the original testing
Figure 3: Separated signal strength curves of the original values, produced with the evaluation scripts

In the testbed I am using two analog attenuators in addition to the programmable attenuator. Figure 4 shows that one path is more strongly attenuated. Without the analog blocks both paths are nearly equally attenuated, which could mean that one of them is broken or not meant for the frequency range, though both of them should be. This will need some further investigation, but as I had most of my exams during this coding phase, I was a little short on time. For now I will continue without these two devices.

Figure 4: Signal strength difference caused by problems with the analog attenuators

In figure 4 you can see a section where one of the two paths does not follow the values presented in the lists at the beginning of this post.

This is a problem in the control software of the digital attenuator. I was able to work around it and fix two more problems in the software. The actual problem needs more research time put into it than I could afford at this point.

Figure 5: Signal strength curves without the analog attenuators, after solving the software problem

The evaluation script checks for our default files and filters out the data we want to plot, like signal strength, selected MCS rate, throughput rate and packet counts. Afterwards the plots are constructed with Python and matplotlib.

For most figures, places with more intense red and blue colour represent values which were encountered more often. The black line represents the overall signal strength provided by tcpdump.

Web Interface for Retroshare – Update 2

I realized that the visual appearance of the application felt very bland and uninteresting, so I decided to shift some of my focus to the design and visual aspects of the UI. I did plenty of reading about UI/UX design principles and modern best practices during this time, and it looks like it turned out pretty well: it is definitely a good improvement over the previous appearance. Also, since this is my first attempt at professional-level UX design, there is probably room for improvement, so feedback and suggestions are always welcome.

The general theme has been redone from scratch. I chose this soft blue color palette by taking inspiration from the main app’s look:

The home tab, along with displaying the user certificate, now also allows adding friends using their certificates. It is possible to add a friend by copying in the certificate contents, dragging and dropping the certificate file, or simply selecting it from the file manager.

Implemented modal messages within the browser that can be used as a popup dialogue box to display any kind of information (here showing information extracted from a Retroshare certificate):

As you can see, the navbar has also been revamped. And the best thing about it? Icons! My mentors and I agreed on using the Font Awesome icon library, which is open source (licensed under a combination of the MIT, CC 4.0 & OFL 1.1 licenses). I can now utilize icons across the whole app.

The downloads tab has also been redesigned, now showing all downloads in a slightly different way. This layout was chosen with extensibility in mind: it can easily be extended to contain additional file-related settings and chunk views by giving each file an expandable options box.

The config tab can now be used to change a lot of the setting options similar to the main app. Network, node, services, files and people sections from the app have been implemented. I will shortly finish the remaining sections too.

Also notice the tooltip icon, which, when hovered over, gives a brief description of the option. Just like in Retroshare:

Next steps

Now that the design is steadily making way for a more detailed and specialized variety of widgets and components, I am working on creating tabs for Network, People, Chats, Mail, Channels & Forums so that the Web Interface can finally become a fully usable alternative to the main client app.

You can try out the Web Interface by cloning it from the repository: https://github.com/RetroShare/RSNewWebUI, and my fork: https://github.com/rottencandy/RSNewWebUI. Again, I am always happy to receive feedback and suggestions for improving the Web Interface.

GSoC 2019 – Import public datasets to Retroshare network second evaluation

Here again!

This evaluation period I spent working on an automatically generated wrapper for the API. The wrapper is generated by analyzing the Doxygen XML files produced when Retroshare is built.

Creating the API wrapper

First of all, I modified the Python script (made by @sehraf) that generates the C++ API files, so that it creates a Python wrapper for the API. Analyzing the script and the XML files, I got my script to generate a first version of the wrapper. Then I tested the wrapper and added support for async functions as well. Some features of the wrapper:

  • Documents the code using the DocString convention.
  • Also parses ‘manualwrappers’ like attemptLogin.
  • Supports requests with and without authentication.
  • Supports basic authorization or token auth via the ‘Authentication: basic base64Token’ header.
  • Supports async methods and callback implementation.

Here is an example of the generated API wrapper: https://gitlab.com/snippets/1877207 . Some tests for the wrapper can be found here:

class TestMultiple(TestCase):
    def test_login(self):
        res = wrapper.RsLoginHelper.isLoggedIn()
        print(res)
        # Do login
        if not res['retval']:
            res = wrapper.RsLoginHelper.attemptLogin(ACCOUNT, PASSWORD)
            print(res)
            self.assertEqual(res['retval'], 0, "CANT LOG IN")
            return
        self.assertEqual(res['retval'], True, "is not logged in")

    def test_authorizedMethod(self):
        res = wrapper.RsGxsChannels.getChannelsSummaries()
        print(res)
        self.assertEqual(res['retval'], True, "Can't get channel summaries")

class TestAsyncMethods(TestCase):
    def cb(self,res):
        print("cb", res)
    def test_asyncMeth(self):
        wrapper.RsGxsChannels.turtleSearchRequest("XRCB", 300, wrCallback=self.cb, wrTimeout=4)

Creating Retroshare Classes wrapper

After that, the problem was that a lot of functions need Retroshare classes as parameters. For example, to create a Retroshare forum, classes like RsGxsForumGroup are needed, which in turn need other inner classes like RsGroupMetaData. With the first version of the wrapper, all these classes were passed in JSON format, which was really annoying to assemble.
So the next step was to also parse these Retroshare classes, recursively, from the XML files into a Retroshare class wrapper. This step was difficult to get right: differentiating the different kinds of classes and class attributes, and translating the types to Python, whether they are enums, primitive types, etc. Finally I created this second class wrapper, so when you need to pass an RsGxsForumGroup to the API wrapper you can just instantiate it, and the wrapper does everything necessary to translate it to Python and call the API. Some features:

  • Parses “compound” classes (structs in C++) recursively.
  • Parses “enums” and gets their values.
  • Parses “typedef” and “using” classes and translates them to the appropriate Python type.
  • Documents everything using the DocString convention.

Here is an example of the class wrapper: https://gitlab.com/snippets/1875153 . Some tests can be found here:

    def test_createChannel(self):
        channelMetadata = RsClass.RsGroupMetaData(mGroupName="TestChdddannelCreation2", mGroupFlags=4, mSignFlags=520)
        channel = RsClass.RsGxsChannelGroup(mMeta=channelMetadata, mDescription="Channel Test")
        res = wrapper.RsGxsChannels.createChannel(channel)
        print(res)
        self.assertEqual(res['retval'], True, "Can't create channel")

For “v2” methods I opened an issue because I couldn’t communicate with the API. It was resolved: my “retroshare-service” simply wasn’t up to date.

Next steps

This script will be adaptable to generate wrappers for whatever language is needed, for example an OpenAPI format, TypeScript… making it much easier for other developers to start developing on the Retroshare network.

It will also be very easy to update when a new feature is added to the API, because the wrapper can be regenerated each time Retroshare is built.

Now it’s time to apply the wrapper to the scripts that will import the public datasets!