My vim setup – with some Rust specificities

So, I've got this thing called a sick leave due to my depression. It means I have a lot of time to do whatever I want.
It includes writing stuff here, gardening and waiting for plants to grow, spending countless hours on Crusader Kings 2 and Europa Universalis 4 (my life is now gone), and doing computer-related things that I wanted to do.
So, among things such as setting up a pam-ldap configuration and documentation for a reset project, I've started learning Rust. Mainly because it is interesting. It's nice and good, but it's a bit hard: lots of concepts are different from my habits with Python. Anyway, I wanted to have a little help provided by context in my vim setup.

My previous config did work quite well for Python, but there were a lot of heavy-duty plugins loaded on opening files, which meant waiting a few seconds when initially starting vim. And this is where my quest began. I wanted to do more things from vim (I never really bothered with compiling tasks or ctags, for instance, since Python or bash do not require them). I spent most of last week understanding vim, reading vimscript, finding plugins, etc.



Hidden services

So, for those of you who never heard about them, there are some hidden services in the wild. They're called .onion if you use Tor – and you should.

Facebook, for instance, also has a .onion. My blog too.

It's neat: it helps protect the privacy of the users and escape mass surveillance and censorship. Anyone should do it if they're even remotely interested in protecting their users (I mean, even Facebook did it. You can't be worse than them on this basis, except if you're a bank).

But users still need to know that the .onion exists, and they still need to go there. And onion addresses are anything but human-friendly. They're hard to remember, and a mistake in one character might land you on a totally different website.

It would be nice if, the same way HTTPS Everywhere redirects you to the HTTPS-enabled website when you go for the non-encrypted version, there were some way to redirect users who use Tor to the .onion version.

Onionify all the things

The cloudflare way

So, you can perfectly well do the same thing Cloudflare is doing. Get a list of exit nodes and – on your web server – when a query comes from one of them, redirect to the hidden service.

It needs an updated list of exit nodes. It can probably be done, but then you also need control of the webserver (which might not necessarily be the case) and some cron jobs.

I need to do a bit more research on that anyway.

HTTP Headers

You can also probably add a header server side which would advertise the .onion. Or advertise the address in DNSSEC zones one way or another. But then, you need the browser to be aware of that and to do those checks before going to the website.

I think it's probably the best way to do it. And it probably isn't a lot of code (we might need a browser plugin for that, to agree with everyone on a standard, and to write an RFC).
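As a sketch of that idea: the server would send an extra response header pointing at the onion, and a Tor-aware browser would pick it up. The header name and the onion address below are hypothetical – exactly the kind of thing a standard would need to settle:

```nginx
# Hypothetical header advertising the .onion mirror of the current page.
# A Tor-aware browser could read it and offer the redirection.
add_header Onion-Location http://myhiddenblogonion.onion$request_uri;
```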

Plain JS

Or you can control the browser with something in your content which is aware of the onion, and which can check whether the browser is capable of using it.

That's what JS is for. A simple HEAD request sent by the client to the onion will tell you if the client can connect to your .onion.

It's probably dirty – it's JS, and it does its thing without asking permission – but the bit of script I've written works fine.

It can be embedded on any page to redirect to a hidden service.


The code is straightforward. No dependencies. You do not need jQuery for doing just one request; XMLHttpRequest is enough.
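This is not my actual script, but a minimal sketch of the approach (the onion address is a placeholder): build the .onion URL, send a HEAD request to it, and redirect only if it answers.

```javascript
// Placeholder address: put your own hidden service here.
var onion = "myhiddenblogonion.onion";

// Build the URL of a given page on the hidden service.
function onionUrlFor(onion, path) {
  return "http://" + onion + path;
}

// Probe the onion with a HEAD request; only clients going through Tor
// can reach it, so a successful answer means we can safely redirect.
if (typeof XMLHttpRequest !== "undefined" && typeof window !== "undefined") {
  var probe = new XMLHttpRequest();
  probe.open("HEAD", onionUrlFor(onion, "/"), true);
  probe.onload = function () {
    // Land the visitor on the same page, .onion side.
    window.location.href = onionUrlFor(onion, window.location.pathname);
  };
  probe.send();
}
```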

It can also easily be adapted (just change the content of the onion variable), and it works from anywhere your client lands.

Better privacy for the user in 15 lines of JS.

The code is here, licensed under the WTFPL. There are probably cleaner ways to do it, and as I said earlier, I think it would be better to have a .onion detection feature in the browser, but it's there now.

And the more you use it, the more people will land on your onions. Which will improve both the Tor network – more casual surfing is always good – and the privacy of your users.

Have fun.

GMail … seriously?

[[!meta description="No, seriously, people are arguing that GMail is in fact a good choice to protect your privacy online."]]

GMail: why it’s not a good thing

This post is an answer to jbfavre's post [FR], in which he states that – from a metadata point of view – you're safer in the mass, and so on GMail for instance, than if you self-host.

In his conclusion he goes on to say that the best choice would be to hand over your mail to associations or small businesses – with which I might agree (under specific conditions).

But he's not the only one stating that you're better off with a GMail account than with one on your own domain name. manhack and others are also arguing that GMail is best to evade mass surveillance.

Those people suggest that using GMail is simple and that Google has a lot of cash to invest in security. Google is also trying hard to hinder the NSA's mass data collection efforts, but I think saying that using a Google service is a good way to protect your privacy is an intellectual bias.

I think this idea comes from a misconception of what mass surveillance is. Mass surveillance is the intricate surveillance of an entire population, or a substantial part of one [WP].

On the internet, mass surveillance is done through the systematic collection of all data and metadata, their archiving and indexing, and the fact that actions and decisions are made based on what those data show.

In France, there's a specific concern because it's now legal for our government to intercept all communications and analyze metadata. Then there's a fallacy stating that if we all use the same host and the same encryption, it's impossible for the state to know who's talking to whom and when; as opposed to the case where everyone has their own host and it's "relatively" easy to know who's speaking to whom and when.

It comes from the fact that, if I'm the only one receiving and sending mail from this computer, then you just need to see the TCP handshake to be sure that someone is talking with me. So it would be safer to have some kind of proxy somewhere, to mutualise those connections and raise the cost of surveillance, wouldn't it?

Except that this answer is valid if and only if certain conditions hold:

  • The proxy is not itself part of a mass surveillance system
  • The mass surveillance you're trying to hide from does not go further than just looking at the TCP layer of your connection
  • Your correspondents also use this sort of mass proxy, or it would be easy to know when they're talking

So, let's see what the case is with GMail.

Is Gmail involved in a mass surveillance system?

The obvious answer would be yes. At least because they can be coerced by the NSA to provide it data. Even if there were actually few uses of PRISM, the fact that they're forced by law to collaborate is not a good thing.

You might argue that it's just the NSA spying on us, and that they cannot actually do things to you if you're not a US citizen – which is false. Because there's at least the Five Eyes coalition, meaning that data gathered on you by the NSA will be shared with agencies from other governments.

Also, I think that saying that NSA mass surveillance has no effect on you shows a lack of understanding of what the impacts of mass surveillance are. I will not elaborate on that; others are doing it better than me.

But there's also something else I want to elaborate on, something we miss in the "governments are evil" stance: the fact that Google is collecting and analysing a lot of data. From your GMail data (and metadata) to your searches, video history, or even the blogs you read. They analyse those data and take actions – to present you with more accurately targeted advertisement and search recommendations. Basically, they're doing mass surveillance on their own.

Google is part of the problem. They cannot be part of the solution to get out of mass surveillance. Sure, they won't kill someone simply based on metadata, you'll say. But they're doing something worse: they won't expose you to information that they deem unrelated to your interests, and you won't even notice it.

So yes, Google – and GMail – is part of a mass surveillance system. They might not collaborate willingly with governments, but they do it at least for their own profit.

Are the mass surveillance systems only targeting IP traffic?

We know – since the exposure of a lot of the NSA's nasty stuff – that a lot of governments have the capacity to intercept traffic on a global scale. Whether your traffic goes to a data silo such as Google's, or to your own server at home, makes no difference: it's intercepted the same way. What would change is that they would need to extract the metadata from the email you send from GMail, while they do not need to decode anything if everyone is on their own box.


They're already doing that. Equipment set up to break TLS, intercept email communications and compromise your endpoint is already in use. So they get no benefit from going for something lighter. If you send an email from one GMail account to another, those natsec agencies can already read it and extract the metadata they need.

And things like Palantir, Hacking Team or Gamma International are all known companies selling solutions to our governments. Those solutions are based on the infection of your endpoint (your smartphone, tablet or computer), so as not to bother with breaking the cryptography of your communications.

After all, if they can read what is displayed on your screen, why should they bother intercepting your TLS connection to a hidden service in Tor?

So the idea that being alone on your node is a compromise of your anonymity is apparently wrong. You do not add metadata to the collection they already have (they already get the headers of your emails, no matter what).

Also, there's a last point that nobody thinks about. If everyone is on GMail, then you just need to compromise GMail to get all the data you need. Just one company. Yes, hacking into Google is something out of my personal scope, but if you're willing to, you can do it. It has been done by China before, and I see no reason for things like that not to happen again.

Hacking into GMail is just an enormous prize: get in, and you can really improve your intelligence. Especially if you stay undetected. Putting all one's eggs in one basket generally ends with an omelette. Even if it's a titanium basket.

Applying this principle, I then need my correspondents to apply it

Because communication is – at least – two-way, if you want to protect and hide a communication, you need to protect both ends of it. So, applying this means that everyone should get a GMail account, because it's safer for everyone.

I mean, you use GMail and I don't. I'm running my own mail server. So you hiding in the crowd does not work, because if I'm getting compromised – and since I do not have Google-grade security – you're being compromised too (after all, they'll be able to get the metadata of the mail you sent me).

So, for this fallacy to be true, you need everyone to have a GMail account. Which will make things worse because, hey, they're part of the problem – as stated above.

Doing that is exactly like not encrypting data or not using Tor because "it would look suspicious". It does not. Protecting your privacy should not look suspicious. If you think it does, then it's kind of too late: you've already swallowed the state's toxic memes about security. But let the ones who want to fight them do it.

No, Gmail, Yahoo, Facebook, Twitter, Microsoft or Amazon will not ever be a solution for privacy. They’re part of the problem.

However, there is one specific case where GMail might be a not-so-bad alternative: throwaway mail (as suggested by OaklandElle). Besides that? No. It will not improve your privacy, quite the other way around.

Solutions? Stop the dragnet and mass surveillance, which you can do only at a societal and political level. And give self-hosting a try if you're looking for it: it works. Mostly. It won't give you better security, but you'll definitely have better control. And even if you're still monitored by the state, at least you won't be monitored by an advertisement-selling company.

[UPDATE] After talking with jbfavre on Twitter, it seems that I didn't understand his point. He did not want to advocate for a massive use of GMail as a way of protecting yourself, but rather for small associative clusters.

I think that's a good option. Simpler for most people than going full self-hosting, and sufficiently decentralised to hinder the mass collection of data. It's not the ideal choice – but then we cannot ask high-risk people to keep their data in their home, where it would be seized by cops – yet it is, I think, a good trade-off between privacy, ease of use and safety.


Welcome to searx

You might have noticed some changes on my seeks node, since it's not a seeks node anymore: it's now a searx node.

Searx is a project started by asciimoo after Taziden gave a talk at Camp Zer0 about going forward with seeks and opening it up to a wider base of developers.

The idea is that seeks – currently written in hardcore C++ – is a prototype and an exploratory project about search and its decentralization, and that we can now build, almost from scratch, a search engine implementing the concepts behind seeks in a more developer-friendly way, for instance in Python.

We already had a lot of discussion with people hanging around about this and, technically, there are two tools to develop: an easily extensible metasearch engine, which will feed a DHT of results shared between nodes.

And then asciimoo wrote searx, an easily extensible meta search engine. Now, we "just" have to connect it to a DHT. But I'll save that for later.

So, how did I install it? I fought a little bit with uwsgi and nginx, but now it works. Here's how.


Getting the code, the dependencies and everything else

Create a searx user, as it's good practice (don't run things as root), do some git cloning and virtualise your environment. Oh, before I forget: I'm running a Debian stable and I try to keep the distribution clean (so no pip install outside of a virtualenv).

cd /usr/local
git clone <searx repository URL>
chown searx:searx -R /usr/local/searx
cd /usr/local/searx
virtualenv searx-ve
. searx-ve/bin/activate

Now you have a running virtual environment in /usr/local/searx/searx-ve and the code in the parent directory. You need to install some dependencies, so launch this command and go get a cup of coffee.

pip install -r requirements.txt

Now, the code is alive. You can test it by running the flask instance:

python searx/webapp.py

And you can proxy requests to http://localhost:8888 from your favorite webserver. It works.
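With nginx, for instance, a minimal vhost fragment along these lines would do (an untested sketch, just to illustrate the proxying):

```nginx
location / {
    # Forward everything to the flask test instance.
    proxy_pass http://localhost:8888;
}
```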


Since it's not daemonized, and you've got only one worker, I wanted something more maintainable. So I needed something like uwsgi (or gunicorn, or whatever) to run the app right from nginx.

Since Debian splits the uwsgi config into a lot of modules, don't forget to install the Python plugin (I was stuck on that for a while). So, let's install uwsgi and the required dependencies.

apt-get install uwsgi uwsgi-plugin-python

The next step is to create an app. On Debian, uwsgi has the same apps-{available,enabled} file structure as nginx or apache. Here's my config file for searx:

vim /etc/uwsgi/apps-available/searx.ini

[uwsgi]
# Who will run the code
uid = searx
gid = searx

# Number of workers
workers = 4

# The rights granted on the created socket
chmod-socket = 666

# Plugin to use and interpreter config
single-interpreter = true
master = true
plugin = python

# Application base folder
base = /usr/local/searx

# Module to import
module = searx.webapp

# Virtualenv and python path
virtualenv = /usr/local/searx/searx-ve/
pythonpath = /usr/local/searx/
chdir = /usr/local/searx/searx/

# The variable holding the flask application
callable = app

Once that’s done, symlink this file in apps-enabled and start uwsgi.

cd /etc/uwsgi/apps-enabled
ln -s ../apps-available/searx.ini
/etc/init.d/uwsgi start

By default, the socket used by uwsgi will be in /run/uwsgi/app/searx/socket. This is where nginx will chat with uwsgi.


The hard part is done. If you already have nginx installed, just add yet another vhost.

vim /etc/nginx/sites-available/searx

server {
    listen 80;
    server_name searx.example.org;  # your FQDN here
    root /usr/local/searx;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/app/searx/socket;
    }
}

Then activate the newly created site and restart nginx.

ln -s /etc/nginx/sites-available/searx /etc/nginx/sites-enabled/
/etc/init.d/nginx restart

And go visit your server (on whatever your FQDN is) on port 80: it should work.

I suggest you install some SSL, but that's beyond the scope of this tutorial.

TBS – Distributing Transcoding

The issue at hand

Recently I've worked a lot on adding content to the TBS by parsing the intertubes automagically. For instance, I have a tumblr and a twitter parser, which allow me to gather data (especially from Egypt). Even if those parsers are stupid, they work.

Another one I wanted to add is the bambuser one. It's a streaming service used a lot by people in the Middle East to broadcast coverage of protests. The Bambuser team is great: they already provided us with an API key for the first versions of the TBS, but they mainly use the flv format for videos.

And I want the TBS to be flash-free, so that means HTML5 formats, and there are three of them: OGG (.ogv), WebM (.webm) and MP4 (.mp4). FLV is none of those.

I used to transcode them as celery tasks, right on the TBS, but the bambuser parser gave me 223 videos to transcode, and given my current configuration and the CPU power needed to transcode from flv to ogv – it can actually take more than 4 days per video – I was stuck.

Also, since I don’t have a lot of CPU cores, I had only one celery worker, so the broadcast wasn’t updating itself, which was a shame.

Distribute work

So, the solution is to not transcode those videos myself. And that's where you can help. I've written a little webservice using a tastypie RESTful API.

The principle is simple: you ask for a job, download the flv video from my server, transcode it into one of the three HTML5 video formats, md5sum it, put it somewhere I can retrieve it (a publicly accessible http/https server is fine) and then PUT me an update.

See? Simple.

So, let's get into the dirty details.

First, you ask for a job to do by hitting this link:

It will answer you with a job to do:

{
  "objects": [
    {
      "id": 399,
      "md5sum": "dce2d12c90cfef2c78b6c5bde98b4c2c",
      "resource_uri": "/tsc/v1/jobs/399/",
      "start_time": "2013-09-18T16:16:32.587953",
      "state": "p",
      "token": "u5d98hOslRQbMJRVtCl6ocLzX5xeCFbneij75Y8j",
      "uri": ""
    }
  ]
}

  • id: the id of the job.
  • md5sum: the checksum of the file you need to transcode.
  • resource_uri: the URI you can use to check the details of the job (append it to the API host). It's also where you're going to PUT stuff once you've done the job.
  • start_time: the time at which the job was created. Usually you should get the oldest one to do.
  • state: the current state of the job. It's "p" in this case, because the job is in Progress (since you're going to do it).
  • token: the token associated with this job ID, and how I'll fight spam. If you don't have both the job ID and the token, you can't PUT anything.
  • uri: the absolute URI of the file I need you to transcode. Just GET this file.

And that's all. You can now transcode the file. For the sake of giving an example, I generally use ffmpeg and invoke it like this:

ffmpeg -i input_file.flv output_file.ogv

It's enough, but if you're an ffmpeg guru, you can probably find better ways. I try to stay as close as possible to the original format (in size especially), but a 320×240 size should be enough if you really need one.

I tend to prefer ogv over webm and mp4, as it's the most free codec of the three, but do what you think is best: I can manage all 3 of them.

Once you’re done, send me a PUT on the resource_uri using only three args.

Technically, add the 'Content-Type: application/json' header to your query. The body needs to be JSON-formatted content with only those three fields:

{
    "md5sum": "The md5 hexdigest hash of your transcoded file",
    "token": "The token associated to the job",
    "uri": "the URL where I can get the file you transcoded"
}

Any other field will lead to an error.
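For illustration, here is roughly what that PUT looks like with plain shell tools. The API host and the uploaded file's URL are placeholders; the job id and token come from the example response above.

```shell
# Values from the job you were given (placeholders here).
ID=399
TOKEN="u5d98hOslRQbMJRVtCl6ocLzX5xeCFbneij75Y8j"

# Stand-in for your transcoded file (normally the ffmpeg output).
echo "dummy transcoded video" > output_file.ogv
SUM=$(md5sum output_file.ogv | cut -d' ' -f1)

# Build the JSON body with exactly the three accepted fields.
BODY="{\"md5sum\": \"$SUM\", \"token\": \"$TOKEN\", \"uri\": \"http://my.server.example/output_file.ogv\"}"
echo "$BODY"

# Then send it to the resource_uri (API host omitted, as above):
# curl -X PUT -H 'Content-Type: application/json' -d "$BODY" "http://API_HOST/tsc/v1/jobs/$ID/"
```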

Once I get the PUT request, I'm going to GET your file. It would be nice to give me the 'Content-Type' header associated with the file. In fact, if it's not one of 'video/ogg', 'video/webm' or 'video/mp4', I'll drop the file and reinitialise the job for someone else to do. So please set up your webserver accordingly.

And once it’s done, you can get back to /todo and start another job.

If no more jobs are available, you’ll get a 404. Then wait for some time (days or hours) for new jobs to transcode.

And a wild client appears

I was working with CapsLock at night to bootstrap a client that automagically does all the stuff.

You'll need ffmpeg – and it seems you need a more recent version than the one in Debian – and some basic Python tools to run it.

Then just:

git clone

And then run it using python in a classical fashion.

Neat, isn’t it? Now, you have no excuse for not helping to transcode the datalove.

If you have any questions, just ping me.

Thanks for your help, your cores and your bandwidth. Datalove upon you.

— UPDATE [2013/09/21]: One of the fields needed for the PUT (namely hash) was wrong.
UPDATE2 [2013/09/21]: Added the git repo for the client.

Building OpenWRT to have PirateBox working on TL-WR703N v1.7

It started with a workshop

With some friends, we decided to have a workshop around the PirateBox, so we ordered a lot of TP-Link WR703N and started to flash them.

They are labelled as the 1.6 revision, but we discovered the hard way that they're not (worse, some of them actually are, and we were lucky with the first one we tried). So basically we created some bricks, and people went home without their PirateBox, which is sad.

The trunk was building fine, but the official snapshots were built without the USB modules, which are mandatory for the PirateBox to work. I had a host with the full OpenWRT toolchain, so I started playing around with it and finally built a workable firmware for this hardware revision.

Work in progress

How can I use it

It works almost like the original tutorial, except that the firmware you need to download is this one, and that in the Install PirateBox steps you need to change the command issued at step 2 like this:

cd /tmp
opkg update && opkg install --force-depends

Note the --force-depends added at the end of the line. It is mandatory, because I built the 'losetup' binary inside busybox, not as a package, so opkg won't find it.

Some error messages about missing dependencies will be printed, but you can ignore them.

Reboot your router, and now everything should work.

Want to build your own?

So, in case you want to have fun with the OpenWRT toolchain, I've pushed my OpenWRT env to gitorious.

Yubikey required at boot

Update (02/11/2012): I added the 'ask a passphrase' functionality to the hook.


As you might already know, I have a Yubikey I use as an authentication token. Without it, I cannot log in on my computer as a normal user.

But I wanted to do more than that. Like, blocking the boot if the key is not present, unmounting encrypted drive by removing the key, etc.

In this post, I'll show you how I've tweaked my initrd system to stop booting if I haven't plugged in the key. I'm using the basic kernel from Arch Linux, and the mkinitcpio system shipped with this distribution.

However, the scripts might be easy to port to a different one.

Writing hooks

I needed a new hook for that. This hook is responsible for embedding the necessary binaries and modules, and for running them at boot.

The Arch wiki has a page about writing custom hooks. It just needs two non-executable scripts. The neat thing is that those scripts will embed all required dependencies when creating the image.

So, use your editor of choice and create the first file /usr/lib/initcpio/hooks/yubikey and paste this content in it:

#!/bin/bash

# Use ykchalresp to test if the yubikey is present
run_hook() {
    local CHAL YCHAL PASS TRIES OK
    msg ":: Loading necessary modules for yubikey..."
    /sbin/modprobe hid_generic

    sleep 2

First, we need to load the required modules. dmesg told me that the module is hid_generic (quite expected, since the key actually is a USB keyboard). I need to sleep a little to give the USB bus time to detect the key. If your system doesn't detect the key, you might need to increase this delay.

    TRIES=0
    OK="KO"
    CHAL="thechallengeresult"
    while [ $TRIES -lt 3 ]
    do
        read -p "Enter your yubikey passphrase: " -s PASS
        YCHAL=$(ykchalresp -2 "$PASS")

This is the crypto part. CHAL contains the expected challenge result (what the ykchalresp command should return), PASS is the passphrase submitted to the key as a challenge, and YCHAL is the answer the key gives back.

We also start a loop to grant you the ability to mistype your password. The call to read with the -s flag reads the passphrase without displaying what you're typing.

        if [ "$CHAL" != "$YCHAL" ]
        then
            err "Challenge Response with yubikey failed"
            ((TRIES += 1))
        else
            msg "Challenge Response with yubikey correct"
            OK="OK"
            break
        fi
    done
    if [ "$OK" != "OK" ]
    then
        exit 1
    fi
}

If everything is OK, CHAL and YCHAL are equal, and you can proceed to the end of the boot. Otherwise, you increment TRIES and loop. When TRIES reaches 3, the loop ends.

At the end of the loop, if OK doesn't contain "OK", we exit; otherwise the normal boot process continues.

The second file required by mkinitcpio is the /usr/lib/initcpio/install/yubikey script.

#!/bin/bash

build() {
    add_module hid_generic
    add_binary /usr/bin/ykchalresp
    add_runscript
}

The build function is called to pack everything into the initrd. We need a module and a binary, so we add them here. Then the add_runscript function tells mkinitcpio that there is a script in hooks/yubikey to be included.

help() {
cat <<HELPEOF
    This hook tries to lock the computer at boot if no yubikey is inserted
HELPEOF
}

The help function just displays a message when you want to know what this hook is about.

Then just add the yubikey hook to your HOOKS array: edit /etc/mkinitcpio.conf and add it after the usbinput things.
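The line ends up looking something like this – an illustrative HOOKS array, not a drop-in; yours will differ, the point is just that yubikey comes after the USB input hooks:

```shell
# /etc/mkinitcpio.conf — illustrative example, not a drop-in.
HOOKS="base udev autodetect modconf block usbinput yubikey filesystems fsck"
```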

And rebuild the initrd.

mkinitcpio -p linux

And now, on boot, you will need your yubikey plugged in.

VPN in a pocket

About the so-called Pirate Box

Everything started when I found no less than three pirate boxes running at the PSES 2012 conference, all of them unaware of the two others. Worse, you could connect to one piratebox or to the internet, but not both, because a pirate box runs offline.

And this is the main problem with this thing. I mean, if I want to download and share, I use BitTorrent; you shouldn't be afraid of the legal consequences of sharing things you like.

But still, those wireless routers are damn small (they literally fit in a hand), they don't need much power to run, and they have some interesting routing capabilities (multiple SSIDs, bridging, meshing, you name it). I was thinking that deploying this kind of hardware could be a way to cover areas with poor connectivity and work collaboratively to route packets. This is pretty much how the internet works.

So, I was thinking about a meshed network of content-sharing boxes that could access the Intertubes and share this access. But accessing the clearternet is not that interesting. With some Telecomix folks we think and work a lot around darknets and weird protocols, because they are fun. And right now, we are working with cjdns – which is not about DNS. Also, a box already configured, offering everyone access through a VPN, can remove the pain of configuring it for non-tech-savvy users, and so get more people using darknets and VPNs.

And I have a TP-Link WR703N dedicated to this experimentation.


Before everything, we need to flash a firmware onto the small router (there are only 4MB of disk to store everything, so it's quite tight). I used the sysupgrade image for Attitude Adjustment (and found my way through the Chinese menu). Nothing specific here; the device works perfectly fine.

Routed AP

Then I wanted my box to connect to a LAN (connected to the clearternet), to set up an access point, and to route everything coming from the AP through the LAN and then to the darknet (configured to work over the clearternet, as darknets usually do).

Quite easy, since there's a recipe for it in the OpenWRT wiki. However, I did change some things, so let's review the different files one after the other.


config wifi-device radio0
    option type mac80211
    option channel 11
    option macaddr ec:17:2f:e0:44:52
    option hwmode 11ng
    option htmode HT20
    list ht_capab SHORT-GI-20
    list ht_capab SHORT-GI-40
    list ht_capab RX-STBC1
    list ht_capab DSSS_CCK-40

Nothing specific here; the defaults are good and I don't need more.

config wifi-iface
    option device radio0
    option network wifi
    option mode ap
    option ssid ChaosBox
    option encryption none

The first interface, configured as an open AP in a dedicated network and without a key. I want everyone to be able to use my VPN without having to find a key.

config wifi-iface
    option device radio0
    option network babel
    option mode adhoc
    option ssid ChaosBabel
    option encryption none

And since I can do multiple SSIDs on the box, I will use this later for meshing the ChaosBoxes together (using babel, because it works out of the box). It should work, but I haven't tested it, so it will be the subject of a different post.


config interface 'loopback'
    option ifname 'lo'
    option proto 'static'
    option ipaddr ''
    option netmask ''

The loopback interface.

config interface 'lan'
    option ifname 'eth0'
    option type 'bridge'
    option proto 'dhcp'

I moved the default configuration (static) to a dynamic one. I will then benefit from what the LAN I'm connected to offers, notably a gateway to the internet. And probably some DNS cache.

config interface 'wifi'
    option proto 'static'
    option ipaddr ''
    option netmask ''

This is my wireless network, the interface corresponding to the wireless device configured in AP mode. I use a dedicated range for it, mostly because the 192.168 ones are over-common and I do not want problems with that.

config interface 'tcxnet'
    option proto 'none'
    option ifname 'tun0'

This one is mainly here to define things that I’ll later use in the firewall.


config defaults
option syn_flood 1
option input ACCEPT
option output ACCEPT
option forward REJECT

So, the defaults. They are good and protect your box a little bit.

config zone
    option name wifi
    option network 'wifi'
    option input ACCEPT
    option output ACCEPT
    option forward REJECT

The zone for all the traffic coming from the wifi network.

config zone
option name lan
option network ‘lan’
option input ACCEPT
option output ACCEPT
option forward REJECT
option masq 1
option mtu_fix 1

The zone for all the traffic coming from the lan. Well, nothing will really come from it, but you see what I mean. However, we want to masquerade (after all, you can probably find things like an mpd or an NFS share on the lan).

config zone
option name tcxnet
option network 'tcxnet'
option input ACCEPT
option output ACCEPT
option forward REJECT
option masq 1
option mtu_fix 1

This zone is for everything going through the tcxnet interface (that will be our cjdns). As with the LAN, and since we want to use services inside the darknet, we will masquerade.

config forwarding
option src wifi
option dest lan

config forwarding
option src wifi
option dest tcxnet

And now, let's forward the traffic to both the lan and the tcxnet zones.
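The same forwarding sections can also be created from a shell with OpenWrt's uci tool; a sketch for the wifi-to-lan one (repeat with dest tcxnet):

```shell
# Create the wifi-to-lan forwarding section and reload the firewall
uci add firewall forwarding
uci set firewall.@forwarding[-1].src='wifi'
uci set firewall.@forwarding[-1].dest='lan'
uci commit firewall
/etc/init.d/firewall restart
```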


[…]

config dhcp wifi
option interface wifi
option start 100
option limit 150
option leasetime 12h

This is the only DHCP pool I have, addressing the wireless part. With a start of 100 and a limit of 150, the pool offers 150 leases, which should be more than enough.

More info

For more info about those configurations, you should read the OpenWrt wiki.

The fun parts


Now, the real fun begins. First, let's install CJDNS. Quite easy thanks to the build made by fremont:

opkg update && opkg install --force-depends

I use the force-depends flag because the nacl and kernel version checks on Attitude Adjustment raise some unneeded conflicts.

And then, following the instructions available in the cryptoanarchy wiki, generate a configuration, add peers and start cjdns:

cjdroute –genconf > /etc/cjdroute.conf

cjdroute < /etc/cjdroute.conf > /dev/null &
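To get cjdroute back after a reboot, a minimal sketch using /etc/rc.local (an assumption on my part, the post only starts it by hand):

```shell
# /etc/rc.local - executed at the end of boot on OpenWrt
# Launch cjdroute with the generated config, discarding output (no logs)
cjdroute < /etc/cjdroute.conf > /dev/null 2>&1 &
exit 0
```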

No logs, sorry, I don't have the room for that. Plus, I do not like them.


I've tried a lot of things, and it appears that the way to get it working is to simply use a SOCKS proxy and to connect through it.

I've installed srelay because it appears to work simply, and it fits in the 4 MB of space I have.

opkg install srelay

We need to configure it to get it working: edit the /etc/srelay.conf file, delete everything, and make it look like this:

allow local subnet to access socks proxy any

Then just start srelay using the automagick init.d script:

/etc/init.d/srelay enable
/etc/init.d/srelay start

It will listen on port 1080 on your OpenWrt box.
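You can check the proxy from a client on the wifi network with curl; the router's address here is a placeholder, since it is whatever you assigned to the wifi interface:

```shell
# ROUTER is your OpenWrt box's address on the wifi network (placeholder)
curl --socks5 "$ROUTER:1080" http://example.com/
```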


Now, start a computer, activate wifi, connect to the ‘ChaosBox’ ESSID and ask for an IP via dhcp.

Start a browser and configure it to use a SOCKS 5 proxy with the parameters used to start srelay: the proxy address is your router's address on the wifi network and the port is 1080.

You have to disable the option to forward DNS queries through the proxy, for srelay can't understand them yet. Also, check that your DNS resolver has been set up by DHCP and points at the router. If it's not, edit your /etc/resolv.conf file and add a nameserver line for the router at the top.


Now, you have two tests to run. First, the plain internet: try to load a regular web page. If it works, go on to the second test.

Try to use the darknet. If you’re connected to the Hyperboria darknet, you can test going on Nodeinfo.hype: http://[fc5d:baa5:61fc:6ffd:9554:67f0:e290:7535]/.

If it works, congratulations 🙂


Why don’t you NAT?

Well, I tried. CJDNS addresses are IPv6, so I chose an IPv6 prefix, announced it on the wifi interface and tried to route it through cjdns. However, the source IPs mismatched.

And IPv6 NAT is off the table on OpenWrt, so I was unable to do it that way.

Why didn't I use Tor?

Simple: OpenWrt + Tor (in fact, libcrypto) is overweight and goes beyond 4 MB. So I would have had to use external storage connected to the USB port, but then the power consumption goes up, and I'd need an external device connected that can be separated from the router.

You spoke about mesh before?

And you didn't see it. Yep, I still need to do that. Tunneling through cjdns was such a pain, but babel works quite easily.

EDITED 08/17/2012 I changed the srelay configuration a little bit; it did not work as expected at first.

EDITED 09/13/2012 I updated the client configuration part since srelay can't forward DNS queries. Also, we did some tests at Le Loop yesterday evening and meshing is quite advanced now; I'll do a post about that at a later time.

EDITED 26/11/2012 The URL for the ipk has changed

How I streamed the last JHack conference


So, yesterday, the regular JHack crew set up an event with Richard Stallman to talk and exchange about the issues involving Free Software and human rights.

And, as we want to build and keep history (also, it was a week day, so some people couldn't come physically to the nice place we had for the occasion), we wanted to stream.

When it comes to streaming something, it usually sums up to having a cam connected to a laptop of some sort, which then sends the video through a more or less closed-source application, everything ending up on the web in a Flash player (websites like Bambuser or Ustream do a great job broadcasting video from revolutions, but I cannot watch the video there for I have no Flash; please people, think HTML5 now. Also, this is why TBS uses HTML5 and not a Flash player).

And I did not want that. There had to be a way to do it without using the horrible command-line tool gstreamer (I cried tears of blood the last time I wanted to use it).

Also, I was surrounded by Apple products (journalists, change your habits! I cannot work like that anymore), none of them usable the way I wanted (meaning, just doing something without Apple software). The last thing I had was a laptop with a small cam and an internal mic.

Tools of the trade

Since we were looking for a streaming solution in #opSyria, part of the preliminary research had already been made, so here are the tools that were needed to stream:

  • A laptop running GNU/Linux (Ubuntu, not my favorite flavor, but let’s deal with it) with an included microphone and webcam.
  • VLC, because when you need to do some video/sound work it is a good tool.
  • A network connection. Ethernet over RJ45 with steady bandwidth is generally a good idea.
  • A server to stream to, with good availability. My choice is a free streaming service based on icecast that can stream .ogg (a free container).

Assembling everything

Once you've found all of the above, the worst part is done. If you have a powerful laptop, you can even record the stream locally; it wasn't needed here since we had a camera crew working on it.

  1. Plug your computer into the network, start it and launch VLC.
  2. Visit your streaming provider and create a channel for your needs. They will send you all the information you need to stream.
  3. In VLC, go to File > Stream and choose your physical devices (nowadays, most probably Video4Linux2; the cam is usually /dev/video* and the sound is your ALSA card, probably hw:0,0). Click on Stream.
  4. Check the display locally check box; it is extremely useful to monitor that everything is OK. Stream to a shoutcast server, filling in the details that were sent to you.
  5. You want to transcode to a set of codecs of your choice (free ones; my choice is Theora/Vorbis).
  6. Click on Go. The streaming will start. Go to your channel page and say ohai to the camera, you're on TV \o/


I had some pain managing the network over there (not mine; they're not used to weird people doing strange things with networks) and with the CPU power needed to transcode. My good old netbook wasn't powerful enough.

The quality was awful, due to the fact that I had nothing better than the internal devices. Next time I need at least a cheap jack microphone and a webcam I can use to zoom on the subject, with better than 2.3 Mpixels.

Also, I need to plug the power cord into a power plug that is actually connected to the electrical network. I had to set this up in a bit of a rush and that totally slipped my mind.

I also need to find a way to do it from the command line. But it works. It’s dead simple and it’s free. So now, you have no excuse.
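A hedged sketch of the command-line equivalent with cvlc, transcoding the webcam and ALSA mic to Theora/Vorbis and pushing the Ogg stream to an icecast-style server; SERVER, PASSWORD and MOUNT are placeholders for your streaming account details:

```shell
# Stream /dev/video0 + the ALSA mic to an icecast server as Ogg Theora/Vorbis.
# SERVER, PASSWORD and MOUNT are placeholders for your streaming account.
cvlc v4l2:///dev/video0 :input-slave=alsa://hw:0,0 \
  --sout "#transcode{vcodec=theo,vb=800,acodec=vorb,ab=128,channels=1}:std{access=shout,mux=ogg,dst=source:PASSWORD@SERVER:8000/MOUNT.ogg}"
```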

If you want a shiny design around it, just put some CSS and HTML around, and that would be enough. But get rid of Flash.

Yubico, PAM, and Challenge/response Authentication

Introducing the yubikey

The yubikey is a small device that acts as a token generator for authentication systems. Yubico builds them and, as they are seen by the system as a simple USB keyboard, they can be easily interfaced with any kind of system.

From generating OATH tokens to one-time password systems, by way of RADIUS and OpenVPN server authentication, they can be used for a lot of funny things and, among other things, the software is free software (not free hardware, alas). The token costs $25 and you can order them in huge quantities.

Simply put, it's a good token for its price and, given my threat model (my computer being stolen), it is enough.

So, some disclaimers.

  • I have no interest in the yubico company or any of their software.
  • You can end up permanently locked out of your stuff if you lose your key and it's the only way you have to log in. But that's what I'm looking to achieve.
  • I am not a security expert. I haven't noticed any obvious security flaw; that does not mean there is none. However, the yubikey seems to do the job.
  • I use Archlinux, and the AUR. You’ll have to adapt things for your distro, but you’re a grown up now, it should not be a problem.
  • The challenge-response mode described here is only available on Yubikey 2.2 and later.

What are we going to do

The first thing I wanted was to lock my computer when the key is away. The simple thing is to launch xlock on the running X servers. It's far from perfect, but if I can do this, I can do more.

The second thing I wanted was to be able to forbid login to people who lack either the key or my user password, a classic Two-factor authentication. But I wanted to do that offline, and without using the static key configuration of the yubikey.

But first, I need some packages, so let’s do some yaourt.

[okhin@tara.sunnydale]$ yaourt -Sy libyubikey pam_yubico ykclient ykpers

The first and second packages are needed for PAM; the last ones are needed for using your key. It seems that some tweaking may be necessary in the PKGBUILD file of pam_yubico: I changed the --with-pam-dir option of the configure invocation to /usr/lib/security and added _CFLAGS=-DHAVE_LIBYKPERS1 to the make invocation.

Configuring udev

So, the first thing to do to xlock everything when removing the YubiKey is to add some udev rules. On my Arch system they're located in /usr/lib/udev/rules.d, and it's recommended to use a low priority, so let's edit the 99-yubi.rules file in this dir. I just need two rules:

ATTRS{idVendor}=="1050", ATTRS{idProduct}=="0010", GROUP="yubi", MODE="0660"
SUBSYSTEM=="usb", ACTION=="remove", ENV{ID_VENDOR}=="Yubico", RUN+="/usr/local/sbin/xlock-yubi"

The first one is a classic udev rule; you'll need to create a group named yubi and add the users who'll configure the key to this group.

The second one is a bit tricky. The yubikey is detected by the system as 3 devices (one usb, one input and one hidraw) and, if you do not add the SUBSYSTEM part, you'll have to go through 3 xlock screens before unlocking your device. It's not that good.

The other weird part is that, when configuring or dealing with your yubikey, the tools scan for the key, so udev removes the input/hidraw parts and adds them back. The only subsystem that gets disconnected when you physically remove the key from your computer is the usb one.

And, for the script, well, do whatever you want in it. It’s not the topic of this post, maybe later.
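The script isn't the topic of the post, but as a minimal sketch, xlock-yubi could lock every running local X display; this assumes xlock is installed, the display-owner detection is deliberately simplistic, and udev may kill long-running RUN tasks, so locks are launched in the background:

```shell
#!/bin/sh
# /usr/local/sbin/xlock-yubi - lock every local X display when the key leaves.
# Called by udev as root; finds the owner of each X socket and locks as them.
for D in /tmp/.X11-unix/X*; do
    DISPLAY=":${D##*X}"
    # Owner of the display socket is the session user (simplistic assumption)
    USERNAME=$(stat -c '%U' "$D")
    su "$USERNAME" -c "DISPLAY=$DISPLAY xlock" &
done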

So, now, when you pull your key out of a USB slot, it will call the script. At least, once you've reloaded the udev daemon:

[root@tara.sunnydale] # udevadm control --reload

There’s also a udevadm monitor command that is quite handy when debugging udev rules.

Set up the key

Ok, now, when you unplug any Yubico branded devices, you’re going to lock your screen. We’re going to move into the fun stuff now.

There's a command for customizing your yubikey. You have to know that the key can hold two different configurations. I'll use the second one, keeping the first for other purposes yet to be found.

So, let’s burn a new configuration for activating challenge-response:

[okhin@tara.sunnydale] $ ykpersonalize -2 -ochal-resp -ochal-hmac

It will ask you for an AES passphrase; I used one generated by the yubikey (by pushing the button), but feel free to use what you want. You won't have to use it again, since the AES key will be stored on the yubikey and no one will be able to read it anymore.

The next step is to generate the PAM configuration for the challenge, and we need a ~/.yubico dir for that. Protect the files inside this directory, for they contain the challenge.

[okhin@tara.sunnydale] $ mkdir ~/.yubico

And then, run this utility to configure the challenges that will be used by pam.

[okhin@tara.sunnydale] $ ykpamcfg -2 -A add_hmac_chalresp

You'll have a file named challenge-KEYID in your ~/.yubico directory. That is the file you need.

If, like me, you have an encrypted /home that is mounted using pam_mount at login, you cannot use this configuration. So, create a world-readable directory where you'll store your challenges.

[root@tara.sunnydale] # mkdir /etc/yubico/challenges -p

And then move your file into it, keeping a 0600 mask and the ownership correctly set up (that is, only the user that will use this key should be able to read it). Replace the challenge part of the name with the username:

[okhin@tara.sunnydale] $ mv {~/.yubico/challenge,/etc/yubico/challenges/okhin}_KEYID
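To keep the mask and ownership as described, a quick sketch (okhin and KEYID as in the example above):

```shell
# Only the owner of the challenge should be able to read it
chown okhin:okhin /etc/yubico/challenges/okhin_KEYID
chmod 0600 /etc/yubico/challenges/okhin_KEYID
```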

And now, we just have to play with pam.

I wanted to force users of my graphical login manager to have a key and to enter their Unix passphrase (I use it to mount my encrypted /home) at the prompt, both conditions being required to log in.

So, in my /etc/pam.d/slim file I’ve added this line just above the pam_unix module:

[...]
auth    required mode=challenge-response chalresp_path=/etc/yubico/challenges
auth    required nullok
[...]

If you want to consider that having the yubikey is the only necessary thing, then change required to sufficient. Be aware that no password will be asked for: as soon as the yubikey is plugged into your computer, knowing your login name is enough to get access to a session, and that is a security risk.
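For that key-only variant, the pam_yubico line would instead read as follows (the module name is assumed from the pam_yubico package installed earlier; again, no password will be asked for):

```
auth    sufficient mode=challenge-response chalresp_path=/etc/yubico/challenges
```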

Relaunch your session manager and window manager, plug your key into your computer, and log in. It will ask for your username and password, as usual. However, if you haven't got your key plugged into your system, you'll be unable to log in.

Congratulations, you’re done. Try to keep a way to still log into your system, in case you lose your key.

You can also have different keys for one user (just add new challenge files). And you can probably have one key for different users (I didn't test that).

What’s next?

I need to change my xlock script to log me out of the box when the key is unplugged. I need to figure out a way to use the yubikey challenge-response mode with systems like LUKS or GPG.

Also, I'd like to use it to connect remotely over VPN or SSH, but I need to look into those HowTos. If some of you wanna give it a shot, you know how to reach me.