It’s been almost two years now that I’ve been hanging out with the Telecomix crew of amazing people/jellyfish, and I think this is the first time I’m writing about it. I’ve discussed it a lot recently, mainly because a lot of media here want to speak with us, and also because I’ve heard of at least two more long-term projects about hacktivists.

We also have an interesting discussion going on inside the ‘core’ team about the whereabouts of the cluster, along with more and more interesting questions coming from people.

Hence, this post. And, well, since Telecomix is the sum of the people inside it, this is not the insight of a single mind, but rather one part of this hydra of jelly.

Follow the white rabbit

One question I get a lot is how I ended up in Telecomix. The answer I generally give is that it just happened. I was not looking to enter such a group of people. I do not think anyone with a sane mind would voluntarily enter a group that will eat your time and nights, put you in front of a lot of unwanted attention (and I’m not speaking about the media here), raise the expectations that people have of you, and confront you with tough choices (going to sleep or having people killed).

If you put it that way, no one would accept. Besides some wannabe heroes maybe. And sociopaths (but heroes are sociopaths anyway).

So, I ended up in Telecomix at the same time I decided to enter a hackerspace. I entered this place, meeting a lot of people. The Telecomix name was already in the media (due to Hosni Mubarak shutting down the intertubes in Egypt) and I was already helping with some Streisand.

I think you do not enter Telecomix. It’s not a place, mainly because a place would let you leave it, so you cannot enter it. You do not join it, for it has no registering system (and anyone telling you there is one might want to lure you, but that’s not the point, not now). You just evolve into something that is Telecomix. Your mindset changes, and evolves into it.

So, you just wake up one day, and it’s like: ‘OMG!!!!!! I’M TELECOMIX NAO!!!!!’. Once the caffeine slowly gets into your organism, and once the morning has passed, you just find that all the people in there are more or less normal people.

There are no crypto-anarchists speaking in tongues, bashing everyone who does not use strong crypto systems and crypto social conventions; there is no supra-intelligent AI that tries to take over the world; there are no pure hackers who feed on data and caffeine; there is no one who wants to save the world.

Enter the Matrix

Well, that’s only partly true. We do have bots that can be quite schizophrenic and sociopathic at times. There are a lot of different and unique persons, from all over the cyberspace. There are sociologists, computer scientists, slackers, hackers, beer makers, paranoiacs and conspiracy-theory adepts, politically-minded and apolitical ones, and I suspect some aliens participate in the cluster.

Some might wonder what a regular day in a hacktivist group looks like. I don’t know, I can barely speak for mine and, well, a lot of people will be disappointed I guess. Have you seen the movie Hackers? No? You should, it’s fun. But it’s not like that.

I spend a lot of time simply sitting in front of a computer, staring at console-like screens (and yes, I do take pleasure in having a computer that no one besides me can understand or use). I do that for my work, and for my hobbies.

If you could get behind the screens, you’d see that I’m connected to a lot of chat rooms, not saying all that much. Even when writing stuff, either for work or, like this piece of text, for my personal use, I’m in a console. Sipping some black coffee, not noticing that it’s two in the morning, you can spend a lot of time chatting with people while writing some software, scanning some infrastructure, or just crawling the intertubes. That’s what I do all day. My job requires it, I do enjoy it, and I’m doing it with the Telecomix crew.

This is my daily routine. Waking up too late, spending way too much time on IRC and the intertubes, spending not enough time with the people around me, going to sleep too late. And also hanging around in hackerspaces and conferences, to make things and to exchange knowledge and skills with people in meatspace. Oh, and playing a lot of games (pen-and-paper RPGs, video games, etc.), and spending time with the media when they ask for it.

So, you see, I have a kind of regular life. I’m not crawling undercover in highly secured areas to steal a computer, I’m not hacking through government systems just to find your credit card. I’m just trying to find new ways to let the data flow, because that’s what matters to me.

Meet the cluster

Asking an agent what Telecomix is will get you into an abyss of perplexity, for none of us have the same definition. For one, we ask this question ourselves quite a lot, and the answer still changes and we have no consensus (but we’re not looking for one).

We agreed on the fact that we’re not an organisation, meaning we have no identified head, agenda, plan or funding. We believe we are a too-centralized acentric cluster. Why too centralized? Because people rely on us instead of trying to build their own things. Or at least, that is the perception I have from the inside.

We could do a lot more things if we had 35 hours in a day and/or a way to work for Telecomix full-time. But then, I think we would lose a lot of the fun. And that’s the important part in Telecomix. The fun. We’re here to have a lot of good time, doing things we like, things that are important (like decentralizing the planet), but you can keep up that rhythm only if you have the opportunity to laugh and have fun.

This is the part where people can feel uncomfortable. We’re not changing the world because we must. Hell, who the fuck are we to think we must change the world? The only one who can do that is you. We’re changing the world because it’s fun. The most amazing things we’ve done, we’ve done only because we enjoyed doing them.

I did enjoy working on VPN and darknet issues for Syrians. I haven’t done it because someone had to step up; this is not my fight and this revolution belongs to the Syrians. I’ve done it because I wanted to learn about it, I wanted to test how communication networks can work under harsh conditions. When the network was attacked by Hosni Mubarak, the cluster just tested whether we could work using the old lines, and how to spread that.

We’re just having fun with weird and unexpected situations, because if we were doing it because we thought we must, and that no one else would step up, we would have burned ourselves out.

The hardest lesson

And this is hard to learn. When working with a group of people where there’s always someone connected and discussing interesting issues, while helping people throughout the world who are trying to communicate and getting arrested and probably killed for having done so, you’ll go through ugly mental states. Caffeine and stress don’t mix well; if you add sleep deprivation, you’ll go technical.

The strength of a cluster is redundancy. Working with such different people, on so many different topics (from ham radio, to darknets, to drones, to ACTA) grants you the possibility to just leave and disconnect.

You won’t feel comfortable, especially when there are lives at stake. But you’ll be up to no good after 36 hours awake, filled with caffeine and alcohol and Cameron knows what. You need a life outside of the cluster, or you’ll become a bot.

The strength of this small group of hacktivists (we’re 220 connected on #telecomix at the time of writing this) is the differences between its members. We often disagree on a lot of topics, but that’s not a problem; we’re in a do-ocracy, and if I want something to be done, I just need to do it myself. And we have a lot to learn from the ones who are different.

Living with people who share your ideals, and all your opinions, is boring. We had some crises, and we’ll have more of them, because that’s how a chaotic and unplanned system should grow.


And we have no plan. We have no agenda. We have some back channels that exist mainly for technical purposes. Those purposes include shouting your rage about someone, hoping that someone else will agree with you, finding out that you’re alone and that you’re an asshole and a bastard, and then just calming down, finding the /ignore command again, and going back to normality mumbling something about Cthulhu’s return or equivalent.

The thing is, I perceive Telecomix as an idea. A powerful, always-changing one. Or as a virtual bar, where you’ll have free virtual drinks, served by nice-looking waiters, waitresses and octopuses, all of them virtual. But you get the point. Or not. I do not care.

I’m not sure I’ve gotten anywhere with that, but I think I’ve enjoyed writing it. That makes me wonder if you’ll have fun reading it. Not sure it makes sense.

Let’s git push this for the sake of it.

Broadcasting news

A little introduction

Everything started from some unplanned stuff done on #opsyria. To give you some context, we have a bot there, named ii, that helps us with information management.

Birth and death of a bot

ii’s birth dates back to the second phase of OpSyria, the phase where we went wild and tried to get some contacts with Syrians. It was at first a greeting bot, telling newcomers some safety tips in Syrian (because we still do not speak Syrian).

Then, we fired up a Twitter account, and so we added Twitter functions to ii. And also (for our platform). And then, we added the possibility for ii to repeat interesting stuff it saw on those platforms (publishing on IRC the things it saw in its following list on both platforms).

Then, we had some problems with the microblogging thing. 140 characters is short, especially when you use Arabic and weird Unicode chars. So, we built a news functionality, which led us to our news website where we still publish real-time news from the ground, thanks to our contacts’ help.

After that, things went crazy. Lots of videos were posted online and we started indexing them. Here came the videos functionality (and later on the pics one, the same thing, but with pictures) and we started building an index of all videos related to Syrian events.

So, this is how we built, over six months, our database of information, with dates, places and comments for each video, picture or news item we could find. We built different websites using these and, one day, we realized that, for the preservation of the data, it would be nice to extract them from the websites they were located on, to be sure they will always be online.

We had fears that Syrian officials (or Assad’s supporters) could manage to get youtube or facebook accounts closed, and then have the videos unavailable and lost for everyone.

The archiving idea

At the 28C3, we already had a somewhat big database. And a script that could download each video and store them on a website, as ‘static files’ with a user-unfriendly interface (an Apache directory listing) located here:

Some journalists just told us that it was nice, but not really usable (no way to easily parse the stuff, or to find events related to one particular date, and so on). So, we started to think about how we could do that.

Parsing it by hand was out of the question: there were more than 600 videos, that is, more than 4GB of files to watch, and some of them are harsh and crude to watch. Besides, we’re still unable to understand Arabic in text, so the only data we could use was the data in the flat files provided by ii.

Let’s compile html

And, at the time, I was playing a lot with ikiwiki, which is a markdown compiler that builds static HTML pages. So, I started looking at that. After all, it can generate HTML5, so it should be easy to add some \<video> tag inside a template; generating the pages from flat text is easy to do in bash, and then I just have to use git to push it and let the magic of ikiwiki work.

We would have a pure HTML website, with smart URLs, easily mirrorable (hey, no ?static=yes&wtf=ya&unknownparam&yetanotherfrckingstuff URLs, just 2012/02/11 for the page of the events of February 11th, 2012), with a tagging system and full HTML5.

This was the concept. And since ikiwiki provides a local.css system, we could even gently ask and harass some designers to get a logo and some design around it (I can live with pure HTML, but a lot of people do like fancy and rounded stuff…)

Enough talk, do it

So, first, installing what we need. I’m on a Debian Squeeze OpenVZ kernel and I’m going to use nginx to serve it. I need to add the unstable version of ffmpeg to support .ogv:

aptitude install ikiwiki nginx ffmpeg

The setup of ikiwiki is pretty easy to do; I’ll paste you all the uncommented lines of TelecomixBroadcastSystem.setup.

So, let’s start with some naming stuff: the name of the wiki, the mail of the admin and the username of the admin.

wikiname => 'Telecomix Broadcast System',
adminemail => '',
adminuser => [qw{a_user_admin}],

Since there’s no user function available, this should be empty.

banned_users => [],

Where I’ll put the markdown files:

srcdir => '/var/ikiwiki/TelecomixBroadcastSystem',

Where ikiwiki will put the generated HTML pages:

destdir => '/var/www/tbs',

What the URL of the website will be:

url => '',

The plugins I want to add. Goodstuff is a package with a lot of useful plugins for ikiwiki. The goodstuff plugin page on the ikiwiki website will give you more details.

I wanted a sidebar (for hosting the navigation), a calendar (to enable the calendar generation) and a favicon (because they are nice). As I do not want the site to be editable, I deactivate the recentchanges plugin.

add_plugins => [qw{goodstuff sidebar calendar favicon}],
disable_plugins => [qw{recentchanges}],

Some system directories and defaults that I’ve kept:

templatedir => '/usr/share/ikiwiki/templates',
underlaydir => '/usr/share/ikiwiki/basewiki',
indexpages => 0,
discussionpage => 'Discussion',
default_pageext => 'mdwn',
timeformat => '%c',
numbacklinks => 10,
hardlink => 0,
wiki_file_chars => '-[:alnum:]+/.:_',
allow_symlinks_before_srcdir => 0,

HTML5 is nice and fun to play with; we should use it more:

html5 => 1,

A link to the post-update git wrapper (that is, once the repo receives an update, it automatically regenerates the wiki):

git_wrapper => '/var/git/TelecomixBroadcastSystem.git/hooks/post-update',
atom => 1,

I want a sidebar on all the pages:

global_sidebars => 1,

I want to autogenerate tag pages, and to store them in the tag/ directory:

tagbase => 'tag',
tag_autocreate => 1,

There are a lot more things to change, but you should have a look at the ikiwiki documentation.

Now, we have to create the various directories, /var/ikiwiki/TelecomixBroadcastSystem and /var/www/tbs, making them writable and owned by the user you’re going to use to generate the wiki, and giving /var/www/tbs permission to be read by the nginx user.
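In shell terms, that’s roughly the following (the build user name `ikiwiki` is my placeholder, pick your own):

```shell
# Create the source and destination directories for the wiki build
mkdir -p /var/ikiwiki/TelecomixBroadcastSystem /var/www/tbs
# Owned by the build user (placeholder name), so it can write there
chown -R ikiwiki:ikiwiki /var/ikiwiki/TelecomixBroadcastSystem /var/www/tbs
# Let the nginx user (www-data on Debian) read the generated pages
chmod -R o+rX /var/www/tbs
```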

And let’s set up the wiki:

ikiwiki --setup /path/to/your/Wiki.setup

Let’s tweak some templates

So, now, I need some templates to work with the videos repo. One for videos, one for pictures (to add a specific CSS class around them), and one for the ‘regular’ page, because I wanted a logo on top of all of them.

Video template

I added a template directory in the wiki root (so, /var/ikiwiki/TelecomixBroadcastSystem/template) and created the video.tmpl file.

The templates of ikiwiki use the HTML::Template system, and the ones I needed were relatively simple ones, so I think comments are not needed:

<article class="video">
    <video controls="controls" type="video/ogg" width="480" src="/videos/<TMPL_VAR file>" poster="/pics/SVGs/tbs_V1.svg"><TMPL_VAR alt></video>
    <p><TMPL_VAR alt></p>
    <p><a href="/videos/<TMPL_VAR file>">Direct Link to the file</a> ||
    <a href="<TMPL_VAR original>">Original link</a></p>
</article>

So, a fixed-width video, in HTML5; the files must be in a /videos/ webdir, and a poster with a nice logo will be displayed over the video before playing it. Some more links to add context, and we’re set up.

Notice the MIME type used here: video/ogg. I want to use really free web formats, which will require transcoding (but that’s a later problem). The same goes for the pictures template.

Page template

The page template is a huge (and complex) one, so here is just a patch:

--- templates/page.tmpl 2012-03-07 15:35:45.000000000 +0000
+++ /usr/share/ikiwiki/templates/page.tmpl      2011-03-28 23:46:08.000000000 +0000
@@ -30,7 +30,6 @@
 </head>
 <body>
 
-<div id="logo"><a href="/" title="Dirty Bytes of Revolutions Since 1337"><img src="/pics/PNGs/tbs_V2.png" alt="Dirty Bytes of Revolutions Since 1337" /></a></div>
 <TMPL_IF HTML5><article class="page"><TMPL_ELSE><div class="page"></TMPL_IF>
 
 <TMPL_IF HTML5><section class="pageheader"><TMPL_ELSE><div class="pageheader"></TMPL_IF>
@@ -134,7 +133,6 @@
 </TMPL_UNLESS>
 
 </div>
-<div class="clearfix"></div>
 
 <TMPL_IF HTML5><footer id="footer" class="pagefooter"><TMPL_ELSE><div id="footer" class="pagefooter"></TMPL_IF>
 <TMPL_UNLESS DYNAMIC>

The clearfix div is there for the goddamn IE browser (at least, that’s what the CSS integrator guy told me). And above, there’s the logo picture.

Let’s build special pages


So, the sidebar plugin grants me the use of a sidebar.mdwn file in the root folder of the wiki.

First, some useful links (back to home, the pure-text news and our webchat):

# Quick Links
* [Back to Home](/index.html)
* [News from the ground](
* [Webchat](

What happened this month:

# This month events

And all the pages since the start of the year:

# Events month by month


The next step is to build a nice index.mdwn page with some speech, the tag cloud and a global map of everything. I’ll skip to the interesting parts (the map and the tag cloud).

The page list uses the map directive to find all the pages under the 2011 and 2012 directories (one per year), which leads to a list of all the daily pages.

# Page list

This will go through all the tags of the pages, and do some computation to generate a nice cloud.
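The directive lines themselves did not survive publishing; with ikiwiki’s map and pagestats plugins, they presumably looked something like this (a sketch, not the exact lines — the first one goes under the page list heading, the second one builds the tag cloud):

```
[[!map pages="2011/* or 2012/*"]]

[[!pagestats pages="tag/*"]]
```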


I then added a favicon.ico file along with a local.css to the repository; the local.css needs to be copied manually into the /var/www/tbs directory. And now, the basic setup is done.


So, now use git to add all those files, then commit and push them. Easy to do; that will generate some files into /var/www/tbs/.

Yeepee, now, we need to populate this.

Bashing across videos

So, I have a list of videos somewhere, of the form:

2011-12-04 homs/al-meedan Random gunfires during the night

(And yes, sometimes, Arabic characters all over the place.) So, I have a date, a location (that will be used for tags), a URL and some comments to add, thanks to ii’s magic (and the huge work done for months). We already had some Python scripts for downloading the videos but, for this kind of thing, I wanted to use something I know: bash. It will be split in two. One half parses youtube’s hellish pages and downloads the .webm files; this part is still in Python, works well, and I was too lazy to rewrite it. The second half gets the video info and adds the necessary information to the wiki.

And then, I’ll need to transcode it.

So, script. Let’s start with some variables; we will need them later.

#!/bin/bash
# We want to download everything.
export VIDEOS_LINK=''
export VIDEOS_RAW_DIR='/var/tbs/tbs/raw/'
export VIDEOS_OGV_DIR='/var/tbs/tbs/videos/'
export VIDEOS_WIKI_ROOT='/var/ikiwiki/TelecomixBroadcastSystem'
export VIDEOS_LIST=${VIDEOS_WIKI_ROOT}/videos.lst
export VIDEOS_NEW=${VIDEOS_WIKI_ROOT}/new_videos.lst

Let’s do some cleaning, and a backup, needed to know what’s new:

[[ -e ${VIDEOS_LIST}.old ]] && rm -rf ${VIDEOS_LIST}.old
[[ -e $VIDEOS_LIST ]] && mv $VIDEOS_LIST ${VIDEOS_LIST}.old

Get the new version of the file list:

cd $VIDEOS_WIKI_ROOT
wget $VIDEOS_LINK --no-check-certificate -O $VIDEOS_LIST

Update the git repository (we probably added tags since last time, hence new pages) and find the new videos (a dirty diff, keeping only the added lines):

git pull > /dev/null 2>&1
diff -N $VIDEOS_LIST ${VIDEOS_LIST}.old | grep -e '^<' > $VIDEOS_NEW

Loop over all the new videos to add them to the wiki:

while read LINE
do

This is a bash array, in case you did not know how they work:

        VIDEO=( $LINE )
        DATE=${VIDEO[1]}
        TTAGS=${VIDEO[2]}

Let’s split TTAGS into different words separated by spaces instead of slashes:

        TAGS=$(echo $TTAGS | tr '/' ' ')
        LINK=${VIDEO[3]}

This is how I get the same thing as [4:] in Python (from the 4th field to the end of the array).
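The line itself got lost in publishing; a minimal sketch of the bash equivalent of Python’s [4:] slicing (the leading field and the COMMENT variable name are my assumptions):

```shell
#!/bin/bash
# A sample line in ii's format: a leading field, date, tags, link, free-text comment
LINE='x 2011-12-04 homs/al-meedan http://example.org/v Random gunfires during the night'
VIDEO=( $LINE )
# ${VIDEO[@]:4} takes everything from index 4 to the end, like LINE.split()[4:] in Python
COMMENT="${VIDEO[@]:4}"
echo "$COMMENT"
```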


The date is YYYY-MM-DD in the file; I want it to be YYYY/MM/DD so as to create my file in the right place (YYYY/MM/DD.mdwn). That way I get an automagick hierarchy, plus you can get to the /2012/02/14 URL quite easily.

The filename is the video link with only the alphanumeric characters kept; that is good enough for me.

        VIDEO_PATH=$(echo ${DATE}.mdwn | tr '-' '/')
        VIDEO_FILENAME=$(echo $LINK | tr -dc '[:alnum:]')

So, if the directory (which is YYYY/MM) does not exist, let’s create it. If the file does not exist, it means this is the first time we see something for that day. We must create the page and add some stuff (notably, the date of creation must be juked, and we also add a nice title). Once the file is created, git add it to the repo.

        # We have only updates, which is nice: no need to check if the video already exists
        [[ ! -d $(dirname ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH}) ]] && mkdir -p $(dirname ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH})
        if [ ! -e ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH} ]
        then
                # (page creation, with the juked date and the nice title, goes here)
                git add ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH}
        fi

Add some tags to the page, along with the video template (one line, really fun); note the .ogv extension added to the filename.
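That one-liner got eaten somewhere between IRC and the blog; assuming it used ikiwiki’s tag and template directives, it could have looked like this (paths and variable values here are demo stand-ins for what the loop computes):

```shell
#!/bin/bash
# Demo values; in the real script these come from the surrounding loop
VIDEOS_WIKI_ROOT='/tmp/tbs-demo'
VIDEO_PATH='2011/12/04.mdwn'
TAGS='homs al-meedan'
VIDEO_FILENAME='httpexampleorgv'
LINK='http://example.org/v'
COMMENT='Random gunfires during the night'

mkdir -p "$(dirname ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH})"
# One line per video: tag the page, then call video.tmpl;
# note the .ogv extension appended to the transcoded file name
echo "[[!tag ${TAGS}]] [[!template id=video file=\"${VIDEO_FILENAME}.ogv\" alt=\"${COMMENT}\" original=\"${LINK}\"]]" >> ${VIDEOS_WIKI_ROOT}/${VIDEO_PATH}
```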

And now, download the file. I need to add a dot at the end of the name, because the download script adds the extension (without the dot) to the file. I download it into a raw dir, from which I’ll later transcode all the videos into the proper format and directory.

        # And now, download it
        python ${VIDEOS_WIKI_ROOT}/scripts/ ${VIDEOS_RAW_DIR} "${VIDEOS_RAW_DIR}/${VIDEO_FILENAME}." "$LINK" > /dev/null 2>&1 &

done < $VIDEOS_NEW

Commit all the changes at once, and push:

# While we're at it, just publish the files
git commit -a -m "VIDEO updated" > /dev/null 2>&1
git push > /dev/null 2>&1

We’re done; only the transcoding is left, which is pretty easy and done in another script. Nothing special here: looping across all the files in the raw dir to transcode them into the videos dir.

#!/bin/bash
# Transcoding a video into ogv
export ORIG='/var/tbs/tbs/raw'
export DEST='/var/tbs/tbs/videos'

for RAW in $(ls -1 $ORIG)
do
        NAME=${RAW%.*}
        echo "transcoding $NAME"
        [[ -e $DEST/${NAME}.ogv ]] || ffmpeg -i $ORIG/$RAW -acodec libvorbis -ac 2 -ab 96k -b 345k -s 640x360 $DEST/${NAME}.ogv
        rm $ORIG/$RAW
done

Bashing across pictures

Same format as the videos, so same scripts, almost. I won’t detail it; just do a sed VIDEO/PICTURE and you’re almost done. Also, the download is done using wget --no-check-certificate.
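The sed trick in question, illustrated on a stub file standing in for the real script (hand-check the result afterwards):

```shell
#!/bin/bash
# Stub standing in for one line of the video script
printf 'export VIDEOS_LIST=${VIDEOS_WIKI_ROOT}/videos.lst\n' > /tmp/dl_video.bash
# Derive the pictures script: rename every VIDEO* identifier to PICTURE*
sed -e 's/VIDEOS/PICTURES/g' -e 's/VIDEO/PICTURE/g' -e 's/videos/pictures/g' \
    /tmp/dl_video.bash > /tmp/dl_pictures.bash
cat /tmp/dl_pictures.bash
```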

Bashing the news

Same kind of thing, except that I add the timestamp to it; besides that, it’s just the same.

Cronjobs everywhere

I now just need to auto-exec the three jobs above, the transcoding, and some ikiwiki-internal command to update the calendars. I’ve got two cronjobs for that, executed every six hours:

0 */6 * * * /var/ikiwiki/TelecomixBroadcastSystem/scripts/dl_news.bash > /dev/null 2>&1 && /var/ikiwiki/TelecomixBroadcastSystem/scripts/dl_pictures.bash > /dev/null 2>&1 && /var/ikiwiki/TelecomixBroadcastSystem/scripts/dl_video.bash > /dev/null 2>&1 && /var/tbs/ > /dev/null 2>/dev/null
0 1/6 * * * ikiwiki-calendar /var/ikiwiki/TelecomixBroadcastSystem.setup "2011/* or 2012/*" 2012

This is the end

Now the wiki builds itself automatically. I then just needed to tweak nginx to suit my needs, but that was really easy to do. I just had to keep in mind that I needed two aliases (one for /videos, one for /pictures), because I did not want to commit all the videos into the git directory (they eat a lot of space), and to tell nginx that .ogv files are indeed video files.

server {
        listen   80; ## listen for ipv4
        listen   [::]:80 default ipv6only=on; ## listen for ipv6

        server_name;

        access_log off;

        location / {
                root   /var/www/tbs;
                index  index.html index.htm;
        }

        location /pictures {
                alias   /var/tbs/pictures;
                autoindex off;
        }

        location /videos {
                alias   /var/tbs/videos;
                autoindex off;
        }
}

And I just need to edit the mime.types file to add those lines at the end:

video/ogg                             ogm;
video/ogg                             ogv;
video/ogg                             ogg;

That’s it, everything works fine now. A final thing was needed to spread it easily (and that’s why I wanted static pages): easing the process of mirroring. The best way to do this is to use rsync in daemon mode, with three read-only modules.

Installing rsync is a piece of cake:

aptitude install rsync

You then need to enable it on Debian; for this, editing the file /etc/default/rsync is the way to go. I wanted to throttle it down and keep it nice on the I/O (because I already have too many processes that eat my CPU, like the transcoding), so I’ve enabled those options in the same file:
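The options block was lost in publishing; on a Squeeze-era Debian, /etc/default/rsync exposes roughly these knobs (the values are my guesses at the spirit of it: enable the daemon, lower its CPU and I/O priority):

```shell
RSYNC_ENABLE=true
RSYNC_NICE='10'
RSYNC_IONICE='-c3'
```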


And then, in /etc/rsyncd.conf, I’ve added those modules:

max connections = 10
log file = /dev/null
timeout = 200

[tbs]
comment = Telecomix Broadcast System
path = /var/www/tbs
read only = yes
list = yes
uid = nobody
gid = nogroup

[videos]
comment = Telecomix Broadcast System - videos
path = /var/tbs/videos
read only = yes
list = yes
uid = nobody
gid = nogroup

[pictures]
comment = Telecomix Broadcast System - pictures
path = /var/tbs/pictures
read only = yes
list = yes
uid = nobody
gid = nogroup
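With those modules up, mirroring boils down to three pulls (the host name is an example, use the real server):

```shell
# Pull the static pages and both media trees into a local web root
rsync -av rsync://tbs.example.org/tbs      /var/www/tbs-mirror/
rsync -av rsync://tbs.example.org/videos   /var/www/tbs-mirror/videos/
rsync -av rsync://tbs.example.org/pictures /var/www/tbs-mirror/pictures/
```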

And that’s it: people can now duplicate the whole thing on a simple web server (they just need the space) without anything else on it than serving web pages.