VPN in a pocket

About the so-called Pirate Box

Everything started when I found no fewer than three PirateBoxes running at the PSES 2012 conference, each of them unaware of the other two. Worse, you could connect to a PirateBox or to the internet, but not both, because a PirateBox runs offline.

And this is the main problem with the thing. I mean, if I want to download and share, I use BitTorrent; you shouldn't be afraid of the legal consequences of sharing things you like.

But still, those wireless routers are damn small (they literally fit in a hand), they need very little power to run, and they have some interesting routing capabilities (multiple SSIDs, bridging, meshing, you name it). I was thinking that deploying this kind of hardware could be a way to cover areas with poor connectivity and to route packets collaboratively. This is pretty much how the internet works.

So, I was thinking about a meshed network of content-sharing boxes that could access the Intertubes and share this access. But accessing the clearternet alone is not interesting. With some Telecomix folks we think and work a lot around darknets and weird protocols, because they are fun, and right now we are working with cjdns (which is not about DNS). Also, a pre-configured box offering everyone access through a VPN removes the pain of configuring it for non-tech-savvy users, and so gets more people using darknets and VPNs.

And I have a TP-Link WR703N dedicated to this experimentation.


Before anything else, we need to flash a firmware onto the small router (there's only 4 MB of storage for everything, so it's quite tight). I used the sysupgrade image for Attitude Adjustment (and found my way through the Chinese menu). Nothing specific here, the device works perfectly fine.

Routed AP

Then I wanted my box to connect to a LAN (itself connected to the clearternet), to set up an access point, and to route everything coming from the AP through the LAN and then to the darknet (which runs over the clearternet, as darknets usually do).

Quite easy, since there's a recipe for it in the OpenWrt wiki. However, I did change some things, so let's review the different files one after the other.


config wifi-device radio0
option type mac80211
option channel 11
option macaddr ec:17:2f:e0:44:52
option hwmode 11ng
option htmode HT20
list ht_capab SHORT-GI-20
list ht_capab SHORT-GI-40
list ht_capab RX-STBC1
list ht_capab DSSS_CCK-40

Nothing specific here, the defaults are good and I don't need more.

config wifi-iface
option device radio0
option network wifi
option mode ap
option ssid ChaosBox
option encryption none

First interface, configured as an open AP in a dedicated network and without a key. I want everyone to be able to use my VPN without having to find a key.

config wifi-iface
option device radio0
option network babel
option mode adhoc
option ssid ChaosBabel
option encryption none

And since the box can do multiple SSIDs, I will use this one later for meshing the ChaosBoxes together (using babel, because it works out of the box). It seems to work, but I haven't really tested it, so it will be the subject of a different post.


config interface 'loopback'
option ifname 'lo'
option proto 'static'
option ipaddr ''
option netmask ''

Loopback interface.

config interface 'lan'
option ifname 'eth0'
option type 'bridge'
option proto 'dhcp'

I moved the default configuration (static) to a dynamic one. I will then benefit from whatever the LAN I'm connected to offers, notably a gateway to the internet, and probably some DNS cache.

config interface 'wifi'
option proto 'static'
option ipaddr ''
option netmask ''

This is my wireless network, the interface corresponding to the wireless device configured in AP mode. I will use the network, mostly because the 192.168 ones are overly common and I do not want to have a problem with that.

config interface 'tcxnet'
option proto 'none'
option ifname 'tun0'

This one is mainly here to define things that I’ll later use in the firewall.


config defaults
option syn_flood 1
option input ACCEPT
option output ACCEPT
option forward REJECT

So, defaults. They are good and protect your box a little bit.

config zone
option name wifi
option network 'wifi'
option input ACCEPT
option output ACCEPT
option forward REJECT

The zone for all the traffic coming from the wifi network.

config zone
option name lan
option network ‘lan’
option input ACCEPT
option output ACCEPT
option forward REJECT
option masq 1
option mtu_fix 1

The zone for all the traffic coming from the lan. Well, nothing will really come from it, but you see what I mean. However, we want to masquerade (after all, you can probably find things like an mpd or an NFS share on the LAN).

config zone
option name tcxnet
option network 'tcxnet'
option input ACCEPT
option output ACCEPT
option forward REJECT
option masq 1
option mtu_fix 1

This zone is for everything going through the tcxnet interface (which will be our cjdns). As with the lan, and since we want to use services inside the darknet, we will masquerade.

config forwarding
option src wifi
option dest lan

config forwarding
option src wifi
option dest tcxnet

And now, let's forward the traffic from the wifi zone to both the lan and the tcxnet zones.


[…]
config dhcp wifi
option interface wifi
option start 100
option limit 150
option leasetime 12h

This is the only DHCP pool I have; it addresses the wireless part. 150 addresses should be enough.

More info

For more info about these configurations, you should read the OpenWrt wiki.

The fun parts


Now the real fun begins. First, let's install cjdns. Quite easy thanks to the build made by fremont:

opkg update && opkg install http://v4.seanode.meshwith.me/openwrt/ar71xx/packages/cjdns_0.4-SNAPSHOT_ar71xx.ipk --force-depends

I use the force-depends flag because the nacl and kernel-version dependencies on Attitude Adjustment raise some unneeded conflicts.

And then, following the instructions available in the cryptoanarchy wiki, generate a configuration, add peers and start cjdns:

cjdroute --genconf > /etc/cjdroute.conf

cjdroute < /etc/cjdroute.conf > /dev/null &

No logs, sorry, I haven't got the room for that. Plus I do not like them.
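Once cjdroute is running, a quick sanity check (assuming cjdns created the tun0 interface mapped to tcxnet above) is to look at the interface and ping a known Hyperboria node over the darknet; the address is the Nodeinfo.hype one used for the tests later:

```shell
# tun0 should exist and carry an fc00::/8 address
ifconfig tun0
# ping the Nodeinfo.hype box through cjdns
ping6 -c 3 fc5d:baa5:61fc:6ffd:9554:67f0:e290:7535
```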


I've tried a lot of things, and it appears that the way to get it working is simply to use a SOCKS proxy and to connect through it.

I've installed srelay because it appears to work simply, and it fits in the 4 MB of space I have.

opkg install srelay

We need to configure it to get it working: edit the /etc/srelay.conf file, delete everything, and make it look like this:

allow local subnet to access socks proxy any

Then just start srelay using the automagic init.d script:

/etc/init.d/srelay enable
/etc/init.d/srelay start

It will listen on port 1080 on your OpenWrt box.


Now, start a computer, activate wifi, connect to the ‘ChaosBox’ ESSID and ask for an IP via dhcp.

Start a browser and configure it to use a SOCKS 5 proxy with the parameters used to start srelay. The proxy address is and the port is 1080.

You have to disable the option to forward DNS queries through the proxy, for srelay can't handle them yet. Also, check that your DNS resolver has been set up by DHCP and is ''. If it's not, edit your /etc/resolv.conf file and add this line on top:


Now you have two tests to run. First, the plainternet: try to load the http://telecomix.org page. If it works, go on to the second test.

Try to use the darknet. If you’re connected to the Hyperboria darknet, you can test going on Nodeinfo.hype: http://[fc5d:baa5:61fc:6ffd:9554:67f0:e290:7535]/.
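If you'd rather test from a terminal than a browser, the same two checks can be done with curl (a sketch; ROUTER_IP is a placeholder for the wifi-interface address of the box):

```shell
# clearnet through the SOCKS proxy
curl --socks5 ROUTER_IP:1080 http://telecomix.org/
# Hyperboria through the same proxy (note the brackets around the IPv6 address)
curl --socks5 ROUTER_IP:1080 'http://[fc5d:baa5:61fc:6ffd:9554:67f0:e290:7535]/'
```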

If it works, congratulations :)


Why don’t you NAT?

Well, I tried. cjdns addresses are IPv6. So I chose an IPv6 prefix, announced it on the wifi interface and tried to route it through cjdns. However, the source IP mismatched.

And IPv6 NAT is off the table for OpenWrt. So I was unable to do it that way.

Why didn't I use Tor?

Simple: OpenWrt + Tor (in fact libcrypto) are overweight and go beyond 4 MB. So I would have had to use external storage connected to the USB port, but then the power consumption goes up. Also, I'd need an external device connected, one that can be separated from the router.

You spoke about mesh before?

And you didn't see it. Yep, I still need to do that. Tunneling through cjdns was such a pain, but babel works quite easily.

EDITED 08/17/2012 I changed the srelay configuration a little bit; it did not work as expected at first.

EDITED 09/13/2012 I updated the client configuration part since srelay can't forward DNS queries. Also, we did some tests at Le Loop yesterday evening and meshing is quite advanced now; I'll do a post about that at a later time.

EDITED 26/11/2012 The URL for the ipk has changed

How I streamed the last JHack conference


So, yesterday, the regular JHack crew set up an event with Richard Stallman to talk and exchange about the issues involving Free Software and Human Rights.

And, as we want to build and keep history (also, it was a weekday, so some people couldn't come physically to the nice place we had for the occasion), we wanted to stream.

When it comes to streaming something, it usually boils down to having a cam connected to a laptop of some sort, which then sends the feed through a more or less closed-source application, everything ending up on the web in a Flash player (websites like Bambuser or Ustream do a great job broadcasting video from revolutions, but I cannot see the videos there, for I have no Flash; please people, think HTML5 now, which is also why TBS uses HTML5 and not a Flash player).

And I did not want that. There had to be a way to do it without using the horrible command-line tool gstreamer (I cried tears of blood the last time I tried to use it).

Also, I was surrounded by Apple products (journalists, change your habits! I cannot work like that anymore), none of them usable the way I wanted (meaning, just doing something without Apple software). The last thing I had was a laptop with a small cam and an internal mic.

Tools of the trade

Since we were already looking for a streaming solution in #opSyria, part of the preliminary research had been done, so here are the tools that were needed to stream:

  • A laptop running GNU/Linux (Ubuntu, not my favorite flavor, but let's deal with it) with an included microphone and webcam.
  • VLC, because when you need to do some video/sound it is a good tool
  • A network connection. Ethernet over RJ45 with steady bandwidth is generally a good idea.
  • A server to stream to, with good availability. My choice is Giss.tv, a free streaming service. It is based on Icecast and can stream .ogg (a free container).

Assembling everything

Once you've found all of the above, the worst part is done. If you have a powerful laptop, you can even record the stream locally; it wasn't needed here since we had a camera crew working on it.

  1. Plug your computer into the network, start it and launch VLC.
  2. Visit Giss.tv and create a channel for your needs. They will send you all the information needed for you to stream.
  3. In VLC go to File > Stream and choose your physical devices (nowadays most probably video4linux2; the cam is usually in /dev/video* and the sound is your ALSA card, probably hw:0,0). Click on Stream.
  4. Check the "display locally" checkbox, extremely useful to monitor that everything is OK. Stream to a Shoutcast server and fill in the details Giss.tv has sent to you.
  5. You want to transcode to a set of codecs of your choice (free ones; my choice is Theora / Vorbis).
  6. Click on Go. The streaming will start. Go on your interface page on Giss.tv and say ohai to the camera, you’re on the TV \o/


I had some pain managing the network over there (not mine; they're not used to weird people doing strange things with the network) and with the CPU power needed to transcode. My good old netbook wasn't powerful enough.

The quality was awful, due to the fact that I had nothing better than internal devices. Next time I need at least a cheap jack microphone and a webcam I can use to zoom on the subject, with better than 2.3 Mpixels.

Also, I need to plug the power cord into a power plug that is actually connected to the electrical network. I had to set this up in a bit of a rush and that totally slipped my mind.

I also need to find a way to do it from the command line. But it works, it's dead simple and it's free. So now, you have no excuse.
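For the record, a command-line version would probably look something like this with cvlc (an untested sketch: the device paths and bitrates are guesses, and PASSWORD and CHANNEL are placeholders for the details Giss.tv sends you):

```shell
# capture cam + mic, transcode to Theora/Vorbis, push to an Icecast mount
cvlc v4l2:///dev/video0 --input-slave=alsa://hw:0,0 \
  --sout '#transcode{vcodec=theo,vb=512,acodec=vorb,ab=96,channels=2}:std{access=shout,mux=ogg,dst=source:PASSWORD@giss.tv:8000/CHANNEL.ogg}'
```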

If you want a shiny design around it, just put some CSS and HTML around, and it will be enough. But get rid of Flash.

Yubico, PAM, and Challenge/response Authentication

Introducing the yubikey

The YubiKey is a small device that acts as a token generator for authentication systems. Yubico builds them and, since they're seen as a universal keyboard, they can easily be interfaced with any kind of system.

From generating OATH tokens to one-time password systems, by way of RADIUS and OpenVPN server authentication, they can be used for a lot of funny things and, among other things, the software is free software (not free hardware, alas). The token costs $25 and you can order them in huge quantities.

Simply put, it's a good token for its price and, given my threat model (my computer being stolen), it is enough.

So, some disclaimers.

  • I have no interest in the yubico company or any of their software.
  • You can end up permanently locked out of your stuff if you lose your key and it's the only way you have to log in. But that's what I'm looking to achieve.
  • I am not a security expert. I haven't noticed any obvious security flaw; that does not mean there is none. However, the YubiKey seems to do the job.
  • I use Archlinux, and the AUR. You’ll have to adapt things for your distro, but you’re a grown up now, it should not be a problem.
  • The challenge-response mode described here is only available on YubiKey 2.2 and later.

What are we going to do

The first thing I wanted was to lock my computer when the key is away. The simple approach is to launch xlock on all running X servers. It's far from perfect, but if I can do this, I can do more.

The second thing I wanted was to be able to forbid login to people who lack either the key or my user password, a classic two-factor authentication. But I wanted to do that offline, and without using the static-key configuration of the YubiKey.

But first, I need some packages, so let’s do some yaourt.

[okhin@tara.sunnydale]$ yaourt -Sy libyubikey pam_yubico ykclient ykpers

The first and second packages are needed for PAM; the last ones are needed to use your key. It seems some tweaking may be necessary in the PKGBUILD of pam_yubico: I changed the --with-pam-dir option of the configure invocation to /usr/lib/security and I added _CFLAGS=-DHAVE_LIBYKPERS1 to the make invocation.

Configuring udev

So, the first thing to do for xlocking everything when removing the YubiKey is to add some udev rules. On my Arch system they're located in /usr/lib/udev/rules.d and it's recommended to use a low-priority one, so let's edit the 99-yubi.rules file in this dir. I just need two rules:

ATTRS{idVendor}=="1050", ATTRS{idProduct}=="0010", GROUP="yubi", MODE="0660"
SUBSYSTEM=="usb", ACTION=="remove", ENV{ID_VENDOR}=="Yubico", RUN+="/usr/local/sbin/xlock-yubi"

The first one is a classic udev rule; you'll need to create a group named yubi and add the users who'll configure the key to this group.

The second one is a bit tricky. The YubiKey is detected by the system as 3 devices (one usb, one input and one hidraw) and, if you do not add the SUBSYSTEM part, you'll have to go through 3 xlock screens before unlocking your device. It's not that good.

The other weird part is that, when configuring or dealing with your YubiKey, the tools scan for the key, removing its input/hidraw devices in udev before adding them back. The only subsystem that gets disconnected exclusively when you physically remove the key from your computer is the usb SUBSYSTEM.

As for the script, well, do whatever you want in it. It's not the topic of this post; maybe later.
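Just to give an idea, a minimal sketch of what /usr/local/sbin/xlock-yubi could look like (assumptions on my part: X sockets live in /tmp/.X11-unix and xlock is the locker you want; adapt to taste):

```shell
#!/bin/sh
# lock every running X display when udev fires this script
for sock in /tmp/.X11-unix/X*; do
    [ -e "$sock" ] || continue       # no X server running, nothing to do
    disp=":${sock##*/X}"             # /tmp/.X11-unix/X0 -> :0
    # lock that display (udev runs this as root; you may need XAUTHORITY too)
    DISPLAY="$disp" xlock &
done
```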

So now, when you take your key out of a USB slot, it will call the script. At least, once you've reloaded the udev daemon:

[root@tara.sunnydale] # udevadm control --reload

There’s also a udevadm monitor command that is quite handy when debugging udev rules.

Set up the key

OK. Now, when you unplug any Yubico-branded device, you're going to lock your screen. Let's move on to the fun stuff.

There's a command for customizing your YubiKey. You have to know that the key can hold two different configurations. I'll use the second one, keeping the first for other purposes yet to be found.

So, let’s burn a new configuration for activating challenge-response:

[okhin@tara.sunnydale] $ ykpersonalize -2 -ochal-resp -ochal-hmac

It will ask you for an AES passphrase; I used one generated by the YubiKey (by pushing the button), but feel free to use what you want. You won't have to use it again, since the AES key will be stored on the YubiKey and no one will be able to read it anymore.

The next step is to generate the PAM configuration for the challenge, and we need a ~/.yubico dir for that. Protect the files inside this directory, for they contain the challenges.

[okhin@tara.sunnydale] $ mkdir ~/.yubico

And then, run this utility to configure the challenges that will be used by pam.

[okhin@tara.sunnydale] $ ykpamcfg -2 -A add_hmac_chalresp

You'll have a file named challenge-KEYID in your ~/.yubico directory. It contains the data you need.

If, like me, you have an encrypted /home that is mounted via pam_mount at login, you cannot use this default location. So, create a world read-writable directory where you'll store your challenges.

[root@tara.sunnydale] # mkdir /etc/yubico/challenges -p

And then move your file into it, keeping a 0600 mask and the ownership correctly set up (that is, only the user who will use this key should be able to read it). Replace the challenge part of the name with the username:

[okhin@tara.sunnydale] $ mv {~/.yubico/challenge,/etc/yubico/challenges/okhin}_KEYID

And now, we just have to play with pam.

I wanted to force users of my graphical login manager to have a key, and to enter their Unix passphrase (I use it to mount my encrypted /home) at the prompt. Both conditions are required to log in.

So, in my /etc/pam.d/slim file, I added this line just above the pam_unix module:

[...]
auth    required    pam_yubico.so mode=challenge-response chalresp_path=/etc/yubico/challenges
auth    required    pam_unix.so nullok
[...]

If you want to consider that having the YubiKey is sufficient on its own, change required to sufficient. Be aware that no password will be asked for: as soon as the YubiKey is plugged into your computer, knowing a login name is enough to get a session, and that is a security risk.
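For reference, the key-only variant of /etc/pam.d/slim would look like this (again: with sufficient, anyone holding the key can log in without a password):

```
[...]
auth    sufficient    pam_yubico.so mode=challenge-response chalresp_path=/etc/yubico/challenges
auth    required      pam_unix.so nullok
[...]
```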

Relaunch your session manager, plug your key into your computer, and log in. It will ask for your username and password, as usual. However, if your key is not plugged into the system, you'll be unable to log in.

Congratulations, you're done. Keep another way to log into your system, in case you lose your key.

You can also have different keys for one user (just add new challenge files). And you can probably have one key for different users (I didn't test that).

What’s next?

I need to change my xlock script to log me out of the box when the key is unplugged. I need to figure out a way to use the YubiKey challenge-response mode with systems like LUKS or GPG.

Also, I'd like to use it to remotely connect over VPN or SSH, but I need to look into those HowTos. If some of you wanna give it a shot, you know how to reach me.

How to install a Pirate Bay proxy?

How to set up a proxy to The Pirate Bay?

Let's assume you have a webserver somewhere in (cyber|cypher)space and you want to set up access to the infamous website thepiratebay.(com|se|org). You need a few things:

  • A webserver, in this case Apache 2, but it can probably be nginx or any other webserver you want to use
  • An up-to-date libssl along with the Net::SSLeay Perl library
  • A dedicated domain name (in this case yar.okhin.fr, can be anything else as long as you own it)
  • The patched tpbCgiProxy provided by The Pirate Bay.
  • A little bit of time.

First, some CGI

Since it's based on CGIProxy, you need NPH support for CGI scripts in your webserver. It has been included in Apache since 1.3, but just check that first.

Also, check that .cgi files are interpreted as CGI scripts, not as text. Check in your mime.conf file (on Debian it's /etc/apache2/mods-enabled/mime.conf) that the following line is uncommented:

AddHandler cgi-script .cgi

Put the nph-tpb.cgi file into your cgi-bin script directory (we will define it later in Apache); just be sure that the user who runs the webserver can exec the file. A good place to put your CGI scripts is usually /usr/lib/cgi-bin/.

It seems the TPB admins have forgotten one URL in their allowed ones. So fix it now, using the editor of your choice: at line 528, add thepiratebay.se to the ALLOWED_SERVERS:

@ALLOWED_SERVERS= ('thepiratebay\.se$', 'thepiratebay\.org$', 'bayimg\.com$', 'suprbay\.com$', 'bayfiles\.com$') ;

Next, let's move to Apache

We will need a nifty file for this virtual host. I'll paste and comment mine here.

<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName yar.okhin.fr

        RewriteEngine on
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://yar.okhin.fr%{REQUEST_URI}
</VirtualHost>

The above just enforces an SSL connection, in case the user is not running HTTPS Everywhere. If you do not want the virtual host to listen on all your addresses, you can specify one instead of the '*'.

<VirtualHost *:443>
        ServerAdmin webmaster@localhost
        ServerName yar.okhin.fr

The name of the server is important in case of multiple virtual hosts. You can add server aliases too, if you'd like to access the proxy under different names.

        ErrorLog ${APACHE_LOG_DIR}/error.log

I do not like logs. But error logs are, in my mind, necessary to keep things working. You're warned: if you crash my server, I'll know it.

        DocumentRoot /usr/lib/cgi-bin/nph-tpb.cgi/

This is the root of your proxy. It should be the complete and absolute path to the CGI script you installed above.

        <Directory /usr/lib/cgi-bin/>
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
        </Directory>

Add some defaults for permissions. As we are in the cgi-bin directory, we do not want anyone to list its contents.

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel crit

        CustomLog /dev/null combined

You do not want to log anything related to your visitors, or it will defeat the purpose of anonymity.

        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/tpb.pem
        SSLCertificateKeyFile /etc/ssl/private/tpb.key

And this is for SSL. Just be sure that the key is only readable (mode 0600) by the user who runs the server.
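One thing implicit in the above: on Debian, the rewrite, ssl and cgi modules must be enabled and the vhost activated (the site name tpb below is an assumption; use whatever you named your vhost file):

```shell
a2enmod rewrite ssl cgi
a2ensite tpb
# check the syntax before reloading
apache2ctl configtest && /etc/init.d/apache2 restart
```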


Why would I do that?

Well, for one, TPB is now censored in the UK. I personally think that should not happen, and if you like your freedom of speech and have access to a server, you should do it too.

Second, it's fun to do this kind of stuff. You'll then be able to use the CGI proxy for different purposes if needed. And, well, do you really need another reason?

Legal issues

Well, as with all non-logging anonymous proxies, this will raise a lot of issues. Even if your server does not actually host any of The Pirate Bay's magnet links, you could be in trouble.

Do what you want, keeping this in the back of your head. Do not do it on a company-owned server, or not without the prior approval of your bosses.

Besides that… Have fun, proxy the planet, mirror the world, copy the tubes.

Broadcasting news

A little introduction

Everything started from some unplanned stuff done on #opsyria. To give you some context, we have a bot there, named ii, that helps us with information management.

Birth and death of a bot

ii's birth dates back to the second phase of opsyria, the phase where we went wild and tried to get some contacts with Syrians. It was at first a greeting bot, telling newcomers some safety tips in Arabic (because we still do not speak Arabic).

Then we fired up a Twitter account, and so we added Twitter functions to ii. And status.net too (for our status.net platform). And then we added the ability to repeat interesting stuff ii saw on those platforms (publishing on IRC the things it saw in its following list on both platforms).

Then we had some problems with the microblogging thing: 140 characters is short, especially when you use Arabic and weird Unicode chars. So we built a news functionality, which leads to our news website where we still publish real-time news from the ground, thanks to our contacts' help.

After that, things went crazy. Lots of videos were posted online and we started indexing them. Here came the videos functionality (and later on the pics one, same thing but with pictures) and we started building an index of all videos related to Syrian events.

So this is how, over 6 months, we built our database of information, with dates, places and comments for each video, picture or news item we could find. We built different websites using it and, one day, we realized that it would be good for the preservation of the data to extract the videos from the websites hosting them, to be sure they will always be online.

We feared that Syrian officials (or Assad's supporters) could manage to get YouTube or Facebook accounts closed, making the videos unavailable and lost for everyone.

The archiving idea

At the 28C3, we already had a somewhat big database, and a script that could download each video and store them on a website as 'static files' with a user-unfriendly interface (an Apache directory listing), located here: http://syria-videos.ceops.eu/

Some journalists told us it was nice, but not really usable (no way to easily browse the material, or to find events related to one particular date, and so on). So we started to think about how we could do that.

Parsing it by hand was out of the question: there were more than 600 videos, which is more than 4 GB of files to watch, and some of them are harsh and crude to watch. Besides, we're still unable to understand Arabic, so the only data we could use was in the flat files provided by ii.

Let’s compile html

At the time, I was playing a lot with ikiwiki, which is a markdown compiler that builds static HTML pages, so I started looking at that. After all, it can generate HTML5, so it should be easy to add a \<video> tag inside a template; generating the pages from flat text is easy to do in bash, and then I just have to use git to push it and let the magic of ikiwiki work.

We would have a pure HTML website with smart URLs, easily mirrorable (hey, no ?static=yes&wtf=ya&unknownparam&yetanotherfrckingstuff URLs, just 2012/02/11 for the page of the February 11th, 2012 events), with a tagging system and full HTML5.

This was the concept. And since ikiwiki provides a local.css system, we could even gently ask and harass some designers for a logo and some design around it (I can live with pure HTML, but a lot of people do like fancy and rounded stuff…).

Enough talk, do it

So, first, install what we need. I'm on a Debian Squeeze OpenVZ guest and I'm gonna use nginx to serve it. I need to add the unstable version of ffmpeg to support .ogv.

aptitude install ikiwiki nginx ffmpeg

The setup of ikiwiki is pretty easy to do; I'll paste all the uncommented lines of TelecomixBroadcastSystem.setup.

So, let's start with some naming stuff: the name of the wiki, the mail of the admin and the username of the admin.

wikiname => 'Telecomix Broadcast System',
adminemail => 'okhin@bloum.net',
adminuser => [qw{a_user_admin}],

Since there’s no user function available, this should be empty.

banned_users => [],

Where I'll put the markdown files:

srcdir => '/var/ikiwiki/TelecomixBroadcastSystem',

Where ikiwiki will put the generated HTML pages:

destdir => '/var/www/tbs',

What the URL of the website will be:

url => 'http://broadcast.telecomix.org',

The plugins I wanna add. goodstuff is a bundle of a lot of useful plugins for ikiwiki; the goodstuff plugin page on the ikiwiki website will give you more details.

I wanted a sidebar (to host the navigation), a calendar (to enable calendar generation) and a favicon (because they are nice). As I do not want the site to be editable, I deactivated the recentchanges plugin.

add_plugins => [qw{goodstuff sidebar calendar favicon}],
disable_plugins => [qw{recentchanges}],

Some system directories and defaults that I've kept:

templatedir => '/usr/share/ikiwiki/templates',
underlaydir => '/usr/share/ikiwiki/basewiki',
indexpages => 0,
discussionpage => 'Discussion',
default_pageext => 'mdwn',
timeformat => '%c',
numbacklinks => 10,
hardlink => 0,
wiki_file_chars => '-[:alnum:]+/.:_',
allow_symlinks_before_srcdir => 0,

HTML5 is nice and fun to play with; we should use it more:

html5 => 1,

A link to the post-update git wrapper (that is, once the repo receives an update, the new wiki is automatically generated):

git_wrapper => '/var/git/TelecomixBroadcastSystem.git/hooks/post-update',
atom => 1,

I want a sidebar for all the pages

global_sidebars => 1,

I want to autogenerate tag pages, and to store them in the tag/ directory:

tagbase => 'tag',
tag_autocreate => 1,

There are a lot more things you can change, but you should have a look at the ikiwiki documentation.

Now we have to create the directories /var/ikiwiki/TelecomixBroadcastSystem and /var/www/tbs, making them writable and owned by the user who will generate the wiki, and giving /var/www/tbs read permission for the nginx user.
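Since nginx only has to serve the static files ikiwiki generates, a minimal server block is enough (a sketch; the server_name matches the url given in the setup file above):

```nginx
server {
    listen 80;
    server_name broadcast.telecomix.org;

    # the destdir from TelecomixBroadcastSystem.setup
    root /var/www/tbs;
    index index.html;
}
```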

And let's set up the wiki:

ikiwiki --setup /path/to/your/Wiki.setup

Let’s tweak some templates

So now I need some templates to work with the videos repo: one for videos, one for pictures (to add a specific CSS class around them), and one for 'regular' pages, because I wanted a logo on top of all of them.

Video template

I added a template directory in the wiki root (so, /var/ikiwiki/TelecomixBroadcastSystem/template) and created the video.tmpl file.

The templates of ikiwiki use the HTML::Template system, and the ones I need are relatively simple; I think comments are not needed.

<article class="video">
    <video controls="controls" type="video/ogg" width="480" src="/videos/<TMPL_VAR file>" poster="/pics/SVGs/tbs_V1.svg"><TMPL_VAR alt></video>
    <p><TMPL_VAR alt></p>
    <p><a href="/videos/<TMPL_VAR file>">Direct Link to the file</a> ||
    <a href="<TMPL_VAR original>">Original link</a></p>
</article>

So: fixed-width video, in HTML5; the files must be in a /videos/ webdir and a poster with a nice logo is displayed on the video before playing it. Some more links to add context, and we're set up.

Notice the MIME type used here, video/ogg: I want to use really free web formats, which will need transcoding (but that's a later problem). The same goes for the pictures template.
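That later transcoding step can be done with ffmpeg (a sketch, assuming the downloaded file is named input.webm; the qscale values are a matter of taste):

```shell
# webm (VP8/Vorbis) -> ogv (Theora/Vorbis), to match the video/ogg type above
ffmpeg -i input.webm -codec:v libtheora -qscale:v 6 \
       -codec:a libvorbis -qscale:a 3 output.ogv
```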

Page template

The page template is a huge (and complex) one, so here is just a patch:

--- templates/page.tmpl 2012-03-07 15:35:45.000000000 +0000
+++ /usr/share/ikiwiki/templates/page.tmpl      2011-03-28 23:46:08.000000000 +0000
@@ -30,7 +30,6 @@
 </head>
 <body>
 
-<div id="logo"><a href="/" title="Dirty Bytes of Revolutions Since 1337"><img src="/pics/PNGs/tbs_V2.png" alt="Dirty Bytes of Revolutions Since 1337" /></a></div>
 <TMPL_IF HTML5><article class="page"><TMPL_ELSE><div class="page"></TMPL_IF>
 
 <TMPL_IF HTML5><section class="pageheader"><TMPL_ELSE><div class="pageheader"></TMPL_IF>
@@ -134,7 +133,6 @@
 </TMPL_UNLESS>
 
 </div>
-<div class="clearfix"></div>
 
 <TMPL_IF HTML5><footer id="footer" class="pagefooter"><TMPL_ELSE><div id="footer" class="pagefooter"></TMPL_IF>
 <TMPL_UNLESS DYNAMIC>

The clearfix div is there for the goddamn IE browser (at least, that’s what the CSS integrator guy told me). And the div above it is the logo picture.

Let’s build special pages


So, the sidebar plugin grants me the use of a sidebar.mdwn file in the root folder of the wiki.

First, some useful links (back to home, the pure text news and our webchat)

# Quick Links

* [Back to Home](/index.html)
* [News from the ground](http://syria.telecomix.org)
* [Webchat](https://new.punkbob.com/chat)

What happened this month:

# This month events

And all the pages since the start of the year.

# Events month by month


The next step is to build a nice index.mdwn page with some introductory text, the tag cloud and a global map of everything. I’ll skip to the interesting parts (map and tag cloud).

The page list uses the map directive to find all the pages under the 2011 and 2012 directories (one per year), which leads to a list of all the daily pages.

# Page list
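The directive itself was lost in extraction; with ikiwiki’s map plugin, a pagespec matching both year directories might look like this (the exact spec is an assumption, mirroring the one passed to ikiwiki-calendar later on):

```
[[!map pages="2011/* or 2012/*"]]
```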

This will go through all of the tags of the pages, and do some computation to generate a nice cloud.
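The directive was lost here too; ikiwiki’s pagestats plugin is the usual way to get a tag cloud (assuming the tags live under a tags/ directory):

```
[[!pagestats pages="tags/*"]]
```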


I then added a favicon.ico file along with a local.css to the repository; the local.css needs to be copied manually into the “/var/www/tbs” directory. And now, the basic setup is done.


So, now use git to add all those files, then commit and push them. Easy to do; that will generate some files in /var/www/tbs/.
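A minimal sketch of that step (the file names are taken from the setup above; the commit message is mine):

```shell
cd /var/ikiwiki/TelecomixBroadcastSystem
git add templates/ sidebar.mdwn index.mdwn favicon.ico local.css
git commit -m "basic wiki setup"
git push
```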

Yippee! Now we need to populate this.

Bashing across videos

So, I have a list of videos somewhere, of the form:

2011-12-04 homs/al-meedan http://www.youtube.com/watch?v=-qjNo0uqSM8 Random gunfires during the night

(And yes, sometimes, Arabic characters all over the place.) So I have a date, a location (which will be used for tags), a URL and some comments to add, thanks to ii’s magic (and the huge work done for months). We already had some python scripts for downloading the videos, but for this kind of thing I wanted to use something I know: bash. It will be split in two. One half parses youtube’s hellish pages and downloads the .webm; this part is still in python, works well, and I was too lazy to rewrite it. The second half gets the video info and adds the necessary information to the wiki.

And then, I’ll need to transcode it.

So, the script. Let’s start with some variables; we’ll need them later.

#!/bin/bash
# We want to download everything.
export VIDEOS_LINK='https://telecomix.ceops.eu/material/ii/videos.txt'
export VIDEOS_RAW_DIR='/var/tbs/tbs/raw/'
export VIDEOS_OGV_DIR='/var/tbs/tbs/videos/'
export VIDEOS_WIKI_ROOT='/var/ikiwiki/TelecomixBroadcastSystem'
export VIDEOS_LIST=${VIDEOS_WIKI_ROOT}/videos.lst
export VIDEOS_NEW=${VIDEOS_WIKI_ROOT}/new_videos.lst

Let’s do some cleaning, and back up the old list; we need it to know what’s new.

[[ -e ${VIDEOS_LIST}.old ]] && rm -rf ${VIDEOS_LIST}.old
[[ -e $VIDEOS_LIST ]] && mv $VIDEOS_LIST ${VIDEOS_LIST}.old

Get the new version of the file list

cd $VIDEOS_WIKI_ROOT
wget $VIDEOS_LINK --no-check-certificate -O $VIDEOS_LIST

Update the git repository (we probably added tags since last time, hence new pages) and find the new videos (a dirty diff, keeping only the added lines).

git pull > /dev/null 2>&1
diff -N $VIDEOS_LIST ${VIDEOS_LIST}.old | grep -e '^<' > $VIDEOS_NEW

Loop over all the new videos to add them to the wiki.

while read LINE
do

This is a bash array, in case you did not know how they work:

        VIDEO=( $LINE )
        # VIDEO[0] is the leading '<' from the diff output, so the fields start at 1
        DATE=${VIDEO[1]}
        TTAGS=${VIDEO[2]}

Let’s split TTAGS into words separated by spaces instead of slashes.

        TAGS=$(echo $TTAGS | tr '/' ' ')
        LINK=${VIDEO[3]}

This is how I get the same thing as [4:] in python (from the 4th field to the end of the array).
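The corresponding line was lost in extraction; in bash, the equivalent slice would be something like this (ALT is an assumed variable name, matching the alt field used in the video template):

```shell
        ALT="${VIDEO[*]:4}"
```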


The date is YYYY-MM-DD in the file; I want it to be YYYY/MM/DD so the file is created in the right place (YYYY/MM/DD.mdwn). That way I get an automagick hierarchy, plus you can get to the /2012/02/14 URL quite easily.

The filename is the video link reduced to alphanumeric characters only, which is good enough for me.

        VIDEO_PATH=$(echo ${DATE}.mdwn | tr '-' '/')
        VIDEO_FILENAME=$(echo $LINK | tr -dc '[:alnum:]')

So, if the directory (which is YYYY/MM) does not exist, let’s create it. If the file does not exist, it means this is the first time we see something for that day. We must create the page and add some stuff (notably, the creation date must be juked; we also add a nice title). Once the file is created, git add it to the repo.

        # We have only updates, which is nice, no need to check if the videos already exist
        [[ ! -d $(dirname ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH}) ]] && mkdir -p $(dirname ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH})
        if [ ! -e ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH} ]
        then
                git add ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH}
        fi

Add some tags to the page, along with the video template (one line, really fun); note the .ogv extension added to the filename.
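That one-liner was also lost in extraction. Using ikiwiki’s tag and template directives, it plausibly looked something like this (the ALT variable holding the comment text is an assumption, as is the exact directive layout):

```shell
        echo "[[!tag ${TAGS}]][[!template id=video file=${VIDEO_FILENAME}.ogv alt=\"${ALT}\" original=\"${LINK}\"]]" >> ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH}
```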

And now, download the file. I need to add a dot at the end of it, because the download script adds the extension (without the dot) to the file. I download it into a raw dir, where I’ll next transcode all the videos into the proper format and directory.

        # And now, download it
        python ${VIDEOS_WIKI_ROOT}/scripts/multiproc_videos_dl.py ${VIDEOS_RAW_DIR} "${VIDEOS_RAW_DIR}/${VIDEO_FILENAME}." "$LINK" > /dev/null 2>&1 &

done < $VIDEOS_NEW

Commit all the changes at once, and push them.

# While we're at it, just publish the files
git commit -a -m "VIDEO updated" > /dev/null 2>&1
git push > /dev/null 2>&1

We’re done; just the transcoding left, which is pretty easy and done in another script. Nothing special here: loop across all the files in the raw dir to transcode them into the video dir.

#!/bin/bash
# Transcoding a video into ogv
export ORIG='/var/tbs/tbs/raw'
export DEST='/var/tbs/tbs/videos'

for RAW in $(ls -1 $ORIG)
do
        NAME=${RAW%.*}
        echo "transcoding $NAME"
        [[ -e $DEST/${NAME}.ogv ]] || ffmpeg -i $ORIG/$RAW -acodec libvorbis -ac 2 -ab 96k -b 345k -s 640x360 $DEST/${NAME}.ogv
        rm $ORIG/$RAW
done

Bashing across pictures

Same format as the videos, so almost the same scripts. I won’t detail them; just s/VIDEO/PICTURE/ and you’re almost done. Also, the download is done using wget --no-check-certificate.

Bashing the news

Same kind of thing, except that I add the timestamp to it; besides that, just the same.

Cronjobs everywhere

I now just need to auto-exec the 3 jobs above, the transcoding, and an ikiwiki-internal command to update the calendars. I’ve got 2 cronjobs for that, executed every 6 hours:

0 */6 * * * /var/ikiwiki/TelecomixBroadcastSystem/scripts/dl_news.bash > /dev/null 2>&1 && /var/ikiwiki/TelecomixBroadcastSystem/scripts/dl_pictures.bash > /dev/null 2>&1 && /var/ikiwiki/TelecomixBroadcastSystem/scripts/dl_video.bash > /dev/null 2>&1 && /var/tbs/transcode.sh > /dev/null 2>&1
0 1/6 * * * ikiwiki-calendar /var/ikiwiki/TelecomixBroadcastSystem.setup "2011/* or 2012/*" 2012

This is the end

Now the wiki auto-builds itself. I then just needed to tweak nginx to suit my needs, but that was really easy to do. I just had to keep in mind that I need two aliases (one for /videos, one for /pictures), because I did not want to commit all the videos into the git directory (they eat a lot of space), and to tell it that .ogv files are indeed video files.

server {
        listen   80; ## listen for ipv4
        listen   [::]:80 default ipv6only=on; ## listen for ipv6

        server_name  broadcast.telecomix.org;

        access_log off;

        location / {
                root   /var/www/tbs;
                index  index.html index.htm;
        }

        location /pictures {
                alias   /var/tbs/pictures;
                autoindex off;
        }

        location /videos {
                alias   /var/tbs/videos;
                autoindex off;
        }
}

And I just need to edit the mime.types file to add those lines at the end:

    video/ogg                             ogm;
    video/ogg                             ogv;
    video/ogg                             ogg;

That’s it, everything works fine now. A final thing was needed to spread it easily (and that’s why I wanted static pages): easing the process of mirroring. The best way to do this is to use rsync in daemon mode with three read-only modules.

Installation of rsync is a piece of cake:

aptitude install rsync

You then need to enable it in Debian; for this, editing the file /etc/default/rsync is the way to go. I wanted to throttle it down and keep it nice on the I/O (because I already have too many processes eating my CPU, like the transcoding), so I’ve enabled those options in the same file:
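The options themselves were lost in extraction; on Debian, /etc/default/rsync supports settings along these lines (the exact values are assumptions):

```
RSYNC_ENABLE=true
RSYNC_NICE='10'
RSYNC_IONICE='-c3'
```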


And then, in the /etc/rsyncd.conf, I’ve added those modules

max connections = 10
log file = /dev/null
timeout = 200

[tbs]
comment = Telecomix Broadcast System
path = /var/www/tbs
read only = yes
list = yes
uid = nobody
gid = nogroup

[videos]
comment = Telecomix Broadcast System - videos
path = /var/tbs/videos
read only = yes
list = yes
uid = nobody
gid = nogroup

[pictures]
comment = Telecomix Broadcast System - pictures
path = /var/tbs/pictures
read only = yes
list = yes
uid = nobody
gid = nogroup

And that’s it; people can now duplicate the whole thing on a simple web server (they just need space) without anything else on it than serving web pages.