07 October 2015

We are hiring!

Did you know SUSE is hiring?

I've just looked at our counter today (October 7, 2015) and we have 68 open positions.

Moreover, we have two positions which might interest people who are reading this blog through Planet GNOME(-FR):
Interested? Apply!

05 October 2015

Xenlism WildFire 2.0.2 icon theme released

Just one month after the 1.0 release, a new version of the Xenlism WildFire icon theme is out, bringing a number of new icons, new wallpapers, and a licence change. Previously under a non-free licence, the theme is now published under the GPL v3.

03 October 2015

Will GNOME Software offer Steam games?

Richard Hughes, the developer of the GNOME project's software centre, has just written a patch exposing the Steam catalogue in GNOME Software, and by extension in GNOME Shell. For now, installing, removing and purchasing commercial games goes through the Steam application, but Richard is in contact with Valve to offer much better integration.

Should this come to pass, the software centre would get a serious boost with the arrival of more than 1500 additional titles. But it would also be the very first time that a software centre which once offered only free software carries more proprietary and commercial applications than free ones.

Counter-Strike: Global Offensive appearing in GNOME Software

25 September 2015

Philips Wireless, modernised

I've wanted a stand-alone radio in my office for a long time. I've been using a small portable radio, but it ate batteries quickly (probably a 4-pack of AA for a bit less than a work week's worth of listening), changing stations was cumbersome (hello FM dials) and the speaker was a bit teeny.

A couple of years back, I had a Raspberry Pi-based computer on pre-order (the Kano, highly recommended for kids, and beginners) through a crowd-funding site. So I scoured « brocantes » (imagine a mix of car boot sale and antiques fair, in France, with people emptying their attics) in search of a shell for my small computer. A whole lot of nothing until my wife came back from a week-end at a friend's with this:

Photo from Radio Historia

A Philips Octode Super 522A, from 1934, when SKUs were as superlative-laden and impenetrable as they are today.

Let's DIY

I started by removing the internal parts of the radio, without actually turning it on. When you get such old electronics, they need to be checked thoroughly before being plugged in, and as I know nothing about tube radios, I preferred not to. And FM didn't exist when this came out, so I'm not sure what I would have been able to do with it anyway.

Roomy, and dirty. The original speaker was removed, the front buttons didn't have anything holding them any more, and the nice backlit screen went away as well.

To replace the speaker, I went through quite a lot of research, looking for speakers designed to be embedded, rather than getting a boxed speaker that I would need to extricate from its container. Visaton make speakers that can be integrated into ceilings, vehicles, etc. That also allowed me to choose one that had a good enough range and would fit into the one hole in my case.

To replace the screen, I settled on an OLED display that I knew would work with the Raspberry Pi without too much effort, a small Adafruit SSD1306. It needed a small amount of soldering that was within my skill level.

It worked, it worked!
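Driver details aside, fitting status text on that display is simple arithmetic; here is a sketch (the wrap() helper and the 5x7-font assumption are mine, not from any Adafruit code):

```python
# The SSD1306 here is 128x64 px; with the classic 5x7 font plus 1 px of
# spacing, a glyph cell is 6x8 px, so the screen fits 21 columns x 8 rows.
COLS, ROWS = 128 // 6, 64 // 8

def wrap(text, cols=COLS, rows=ROWS):
    """Greedy word-wrap of a 'now playing' string, clipped to the screen."""
    lines, line = [], ""
    for word in text.split():
        word = word[:cols]                      # hard-truncate overlong words
        candidate = (line + " " + word).strip()
        if len(candidate) <= cols:
            line = candidate
        else:
            lines.append(line)
            line = word
    if line:
        lines.append(line)
    return lines[:rows]                          # anything further scrolls off

print(wrap("Radio Paradise - now playing: some very long track title"))
```

The wrapped lines would then be handed to whatever driver actually pushes pixels to the screen.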

Hey, soldering is easy. Because of the size of the speaker I selected, and the output power of the RPi, I needed an amp. The Velleman MK190 kit was cheap (€10), and should have been able to work with the 5V USB power supply I planned to use. Except that the schematics are really not good enough for an electronics beginner. I spent a couple of afternoons verifying, checking the Internet for alternate instructions, and re-doing the solder points, to no avail.

'Sup Tiga!

So much wasted time, and got a cheap car amp with a power supply. You can probably find cheaper.

Finally, I got another Raspberry Pi, and SD card, so that the Kano, with its super wireless keyboard, could find a better home (it went to my godson, who seemed to enjoy the early game of Pong, and being a wizard).

Putting it all together

We'll need to hold everything together. I got a bit of help from somebody with a Dremel tool for the piece of wood that holds the speaker, and for another one that sticks three stove bolts out of the front, to hold the original tuning, mode and volume buttons.

A real joiner

I fast-forwarded the machine by a couple of years with a « Philips » figure-of-8 plug at the back, so the machine's electrics would be well separated from the outside.

Screws into the side panel for the amp, blu-tack to hold the OLED screen for now, RPi on a few leftover bits of wood.


My first attempt at getting something that I could control on this small computer was lcdgrilo. Unfortunately, I would have had to write a Web UI for it (remember, my buttons are just stuck on, for now at least), and probably port the SSD1306 OLED screen's driver from Python, so not a good fit.

There's no proper Fedora support for Raspberry Pis, and while one can use a nearly stock Debian with a few additional firmware files on Raspberry Pis, Fedora chose not to support that slightly older SoC at all, which is obviously disappointing for somebody working on Fedora as a day job.

Looking for other radio retrofits (and there are plenty of quality ones on the Internet) and for various connected-speaker backends, I found PiMusicBox. It's a Debian variant with Mopidy built in, and a very easy initial setup: edit a settings file on the SD card image, boot, and access the interface via a browser. Tada!

Once I had tested playback, I lowered the amp's volume to nearly zero, raised the web UI's volume to the maximum, and then raised the amp's volume to the maximum bearable for the speaker. As I won't be able to access the amp's dial, we'll have this software-only solution.

Wrapping up

I probably spent a longer time looking for software and hardware than actually making my connected radio, but it was an enjoyable couple of afternoons of work, and the software side isn't quite finished.

First, in terms of hardware support, I'll need to make this OLED screen work (how lazy of me). The audio setup currently only drives the right speaker, as I'd like both the radio and AirPlay streams to be downmixed to the single speaker.

Secondly, Mopidy supports plugins to extend its sources and uses GStreamer, so it would be a good fit for Grilo, which would make it easier for Mopidy users to add sources through Lua.

Do note that the Raspberry Pi I used is a B+ model. For B models, it's recommended to use a separate DAC because of the bad audio quality, even if the B+ isn't that much better. Testing out the HDMI output with an HDMI-to-VGA+jack adapter might be a way to cut costs as well.

Possible improvements could include making the front-facing dials work (that's going to be a tough one), or adding RFID support, so I can wave items in front of it to turn it off, or play a particular radio.

In all, this radio cost me:
- 10 € for the radio case itself
- 36.50 € for the Raspberry Pi and SD card (I already had spare power supplies, and supported Wi-Fi dongle)
- 26.50 € for the OLED screen plus various cables
- 20 € for the speaker
- 18 € for the amp
- 21 € for various cables, bolts, planks of wood, etc.

I might also count the 14 € for the soldering iron, the 10 € for the Velleman amp, and about 10 € for adapters, cables, and supplies I didn't end up using.

So between 130 and 150 €, and a number of afternoons, but at the end, a very flexible piece of hardware that didn't really stretch my miniaturisation skills, and a completely unique piece of furniture.

In the future, I plan on playing with making my own 3-button keyboard, and making a remote speaker to plug in the living room's 5.1 amp with a C.H.I.P computer.

Happy hacking!

24 September 2015

GNOME 3.18 is out!

We left codenames and macaques behind years ago, but this year at GUADEC came the idea of a small gift to the GUADEC and GNOME.Asia teams, who do amazing work. And here we are: the GNOME 3.18 release has been named "Gothenburg" as a token of recognition for this year's GUADEC team.

GUADEC is an important moment in the life of the GNOME project; it is where we gather and assert that we are foremost a community of dedicated people, all working to produce the best computing environment we can. It is with that point in mind that I will nevertheless use this space to single out one person and congratulate him for all the work he put into 3.18.

Let me introduce him, Carlos Soriano. Round of applause please.

Files (né Nautilus) is a key part of the desktop with a very long history, but he was not intimidated. Helped by the designers and other fellow developers, Carlos put a massive amount of work into it this cycle, and the result is simply fantastic. When I was taking screenshots for the release notes, I was amazed by all the small details, like the way a "New Bookmark" entry slides in when dragging an item over the sidebar.

"New bookmark" entry in Files sidebar

Voila. Carlos also did many other things. GNOME 3.18 also has many other things. This was my highlight.

Thanks to everyone involved.

23 September 2015

GNOME 3.18, here we go

As I'm known to do, a focus on the little things I worked on during the just released GNOME 3.18 development cycle.

Hardware support

The accelerometer support in GNOME now uses iio-sensor-proxy. This daemon also now supports ambient light sensors, which Richard used to implement the automatic brightness adjustment, and compasses, which are used in GeoClue and gnome-maps.

In kernel-land, I've fixed the detection of some Bosch accelerometers and added support for another Kionix one, as used in some tablets.

I've also added quirks for out-of-the-box touchscreen support on some cheaper tablets using the goodix driver, and started reviewing a number of patches for that same touchscreen.

With Larry Finger, of Realtek kernel drivers fame, we've carried on cleaning up the Realtek 8723BS driver used in the majority of Windows-compatible tablets, in the Endless computer, and even in the $9 C.H.I.P. Linux computer.

Bluetooth UI changes

The Bluetooth panel now has better « empty states », explaining how to get Bluetooth working again when a hardware killswitch is used, or it's been turned off by hand. We've also made receiving files through OBEX Push easier, and built it into the Bluetooth panel, so that you won't forget to turn it off when done, and won't have trouble finding it, as is the case for settings that aren't used often.


GNOME Videos has seen some work, mostly in the stabilisation and bug-fixing department; most of those fixes also landed in the 3.16 version.

We've also been laying the groundwork in grilo for writing ever less code in C for plugin sources. Grilo Lua plugins can now use gnome-online-accounts to access keys for specific accounts, which we've used to re-implement the Pocket videos plugin, as well as the Last.fm cover art plugin.

All those changes should allow implementing OwnCloud support in gnome-music in GNOME 3.20.

My favourite GNOME 3.18 features

You can call them features or bug fixes, but the overall improvements in Wayland and touchpad/touchscreen support are pretty exciting. Do try it out when you get a GNOME 3.18 installation, and file bugs; it's coming soon!

Talking of bug fixes, this one means that I don't need to put in my password by hand when I want to access work related resources. Connect to the VPN, and I'm authenticated to Kerberos.

I've also got a particular attachment to the GeoClue GPS support through phones. This allows us to have more accurate geolocation support than any other desktop environment around.

A few for later

The LibreOfficeKit support coming to gnome-documents will help us get support for EPubs in gnome-books, as it will make it easier to plug in previewers other than the Evince widget.

Victor Toso has also been working through my Grilo bugs to allow us to implement a preview page when opening videos. Work has already started on that, so fingers crossed for GNOME 3.20!

10 September 2015

Ubuntu Make 15.09.2 enables you to install Android SDK only.

I'm proud to announce this new Ubuntu Make release, with excellent new features and fixes from our community.

First, welcome Sebastian Schubert to the Ubuntu Make contributor family. He did some awesome work implementing Android SDK-only support (for those not wanting to install the whole Android Studio bundle) in Ubuntu Make! As usual, this is backed up with large and medium tests to cover us. A great enhancement! :)

The new command to install the Android SDK only is:

$ umake android android-sdk

After your next login, you will have the expected Android platform tools in your user PATH.

Secondly, Omer Sheikh, who already implemented language selection in Firefox Developer Edition, came back with the heavy-duty work of rationalizing all exit codes across Ubuntu Make, to ensure we always exit with the expected error code in every situation. Not only did he implement this, he also grew our testsuite to ensure that any bad download page is properly detected! Awesome work.

Smaller fixes sneaked in as well, and you can get the full release content details here. As usual, you can get this latest version directly through its PPA for the 14.04 LTS, 15.04 and wily Ubuntu releases.

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to lend a hand, you can refer to this post with useful links!

01 September 2015

Ubuntu Make 15.09 featuring experimental Unity 3D editor support

Last Thursday, the Unity 3D team announced experimental builds of the Unity editor for Linux.

This was quite exciting news, especially for me as a personal Unity 3D user. It was the perfect opportunity to implement install support in Ubuntu Make, and this is now available for download! The "experimental" label comes from the fact that it's experimental upstream as well: there is only one version out (and so no download section; we always fetch the latest) and no checksum support. We talked about it on upstream's IRC channel and will work with them on this in the future.

Unity3D editor on Ubuntu!

Of course, all of this is, as usual, backed up with tests to ensure we spot any issue.

Speaking of tests, this release also fixes Arduino download support, which broke due to upstream versioning scheme changes. This is where our heavy investment in tests really shines, as we could spot it before getting any bug reports!

Various more technical "under the hood" changes went in as well, to make contributors' lives easier. We have recently received even more excellent contributions (to be honest, it's starting to be hard for me to keep up with the load!); more on that next week, with nice goodies which are cooking up.

The whole release details are available here. As usual, you can get this latest version directly through its PPA for the 14.04 LTS, 15.04 and wily Ubuntu releases.

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to lend a hand, you can refer to this post with useful links!

14 August 2015


And here comes my own tale of GUADEC 2015. First, the city. Göteborg has all it takes: a big river, parks and stairs (two of my favourite city elements), huge brick buildings, and trams for public transport (and they even have stairs within some trams)...

Bricks and stairs and park

Bricks and stairs and park, Göteborg, August 9th 2015

The first evening was Thursday, and while Patrick was waiting at the hostel distributing key cards to attendees for hours (thanks!), the rest of the GUADEC troop was already at Järntorget, enjoying beers and the company of people we do not see the rest of the year.

Then came the core days, with many interesting sessions. Of course there was the trademark keynote by Pamela Chestek, but my U+1F493 BEATING HEART goes to Behdad's talk, "Fonts without Borders". He showed big achievements in script rendering, with enormous test suites, but it will be remembered for the U+1F42F TIGER FACE part: colour emoticons, usable in filenames, correctly displayed in menu items, all demonstrated live.

And that was it; so short, gone were the core days. But then came the BOFs, with a change of venue and an even better transportation system than the public trams. Like many others, I really enjoyed taking the boat every morning and evening to get to the IT University.

The BOF and hackfest days were great. I did various things, but most importantly I got to fix the integration of the "getting started" documentation pages in help.gnome.org (I think I get to fix this one every time we have a documentation hackfest, but this time it is for good). I also got to play with OpenShift to create module activity reports (but it lacks disk space, so it cannot process all modules), and to offer a form server to GSoC admins (after discussions with Lasse and Alexandre Franke).

Builder, builder everywhere

Builder, builder everywhere, Göteborg, August 9th 2015

Bonus: a keynote we didn't have. Maybe you drank the excellent beers from the Ocean micro-brewery. It is not displayed on their website, but they have also helped produce the "We Can Do It" beer, brewed by FemAle. And here came, way too late, the keynote idea: having them talk about their "diversity" experience in the beer-enthusiasts community, and what they did. (You can read the story in the Guardian: "Revolution brewing as Sweden's first beer made by women goes on sale".)

We Can Do It beer bottle

Göteborg, August 9th 2015

Thanks again to the GNOME Foundation for supporting travel and accommodation for lots of people (including me).

09 August 2015

Juniper vSRX on Proxmox VE

Juniper provides a JUNOS, based on the one used by the SRX series, that can be used in a virtual machine. That product is great for Juniper users who want to play with their favorite network OS, and also for people who would like to discover the JUNOS world.

Juniper provides images for VMware- and KVM-based hypervisors. As a Proxmox VE user, you know that it uses KVM to get things done. So having Firefly Perimeter working on Proxmox VE should be doable without much trouble. Here are the steps to get things working.

Downloading vSRX (Firefly Perimeter)

To setup vSRX on Proxmox VE we need to download the JVA file provided by Juniper. This file is an archive containing the KVM VM definition and the QCOW2 disk of the VM.

Preparing the VM

We then need to create a VM with the following characteristics (see also the end of this article):

  • OS: Other OS types (other)
  • CD/DVD: Do not use any media
  • Hard Disk: VIRTIO0 or IDE0, size of 2 GB, QCOW2 format
  • CPU: at least 2 sockets and 1 core, type KVM64 (default on latest versions of Proxmox VE)
  • Memory: 1024 MB is recommended (but 2048 MB would be better)
  • Network: maximum of 10 interfaces, use VIRTIO or Intel E1000 as model for interfaces

Using the vSRX Disk

Now that the VM definition has been created, we need to use the disk provided in the JVA file. For that we first need to extract it.

# bash junos-vsrx-12.1X47-D10.4-domestic.jva -x

The disk will be available in the directory that has been created. We just need to copy the disk to replace the one used by the VM (replace VMID by the ID of your VM).

# cp junos-vsrx-12.1X47-D10.4-domestic.img /var/lib/vz/images/VMID/vm-VMID-1.qcow2

With this, the VM is now bootable and JUNOS will load properly; we will not be able to use it yet, though. For that, we need to find a way to send the serial output to the Proxmox VE noVNC console.

Getting the serial output in the Proxmox VE console

First we need to find where our VM definition is stored. Usually it is under /etc/pve/nodes/NODENAME/qemu-server/VMID.conf (replace NODENAME and VMID with your own). But we can use a command like the following:

# find / -name 'VMID.conf'

Then we can edit the VM definition file:

# vim /etc/pve/nodes/NODENAME/qemu-server/VMID.conf

And we have to add the following line in the configuration:

args: -serial tcp:localhost:6000,server,nowait

Eventually, we need to change the VM display to use Cirrus Logic GD 5446 (cirrus), either via the Proxmox VE web interface or just by adding vga: cirrus in the VM definition.
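Put together, the two additions make the relevant excerpt of the VM definition look like this (NODENAME and VMID being placeholders, as before):

```
# /etc/pve/nodes/NODENAME/qemu-server/VMID.conf (excerpt)
args: -serial tcp:localhost:6000,server,nowait
vga: cirrus
```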

The End

We can now just start the VM, and the output will be displayed in the Proxmox VE console. Enjoy using JUNOS with virtual machines.


Edit (2015-06-17):

After some tests, I was glad to see that both the disk and the network interfaces can use the VIRTIO drivers. I would recommend using this type of driver, since it is supposed to improve scheduling at the hypervisor level.

02 June 2015

Tuning systemd services

Recently my Tor relay started crashing daily. I found out it was because usage had increased (approaching 10MB/s), and every night when logrotate asked it to reload, it failed with:

May 30 04:02:01.000 [notice] Received reload signal (hup). Reloading config and resetting internal state.
May 30 04:02:01.000 [warn] Could not open "/etc/tor/torrc": Too many open files
May 30 04:02:01.000 [warn] Unable to open configuration file "/etc/tor/torrc".
May 30 04:02:01.000 [err] Reading config failed--see warnings above. For usage, try -h.
May 30 04:02:01.000 [warn] Restart failed (config error?). Exiting.
May 30 04:02:01.000 [warn] Couldn't open "/var/lib/tor/state.tmp" (/var/lib/tor/state) for writing: Too many open files

The problem comes from LimitNOFILE=4096 in the service file, and I had no idea how to fix it cleanly.

fcrozat gave me the answer, which I'll summarize as:

mkdir /etc/systemd/system/tor.service.d/
printf '[Service]\nLimitNOFILE=16384\n' > /etc/systemd/system/tor.service.d/limit.conf
systemctl daemon-reload
service tor restart
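To check which limit a process actually inherited (for instance from inside a service), Python's standard resource module can query it; a small sketch:

```python
import resource

# "Too many open files" means the process hit its soft RLIMIT_NOFILE;
# the soft limit can be raised up to the hard limit without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
```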

25 May 2015

SUSE Ruling the Stack in Vancouver

Rule the Stack

Last week during the OpenStack Summit in Vancouver, Intel organized a Rule the Stack contest. That's the third one, after Atlanta a year ago and Paris six months ago. In case you missed the earlier episodes, SUSE won the two previous contests, with Dirk being pretty fast in Atlanta and Adam completing the HA challenge so we could keep the crown. So of course, we had to try again!

For this contest, the rules came with a list of penalties and bonuses which made it easier for people to participate. And indeed, there were quite a number of participants, with the schedule for booking slots being nearly full. While deploying Kilo was the goal, you could go with older releases at a 10-minute penalty per release (so +10 minutes for Juno, +20 minutes for Icehouse, and so on). In a similar way, the organizers wanted to see some upgrades and encouraged that with a bonus that could significantly impact the results (-40 minutes); nobody tried that, though.

And guess what? SUSE kept the crown again. But we also went ahead with a new challenge: outperforming everyone else not just once, but twice, with two totally different methods.

For the super-fast approach, Dirk again built an appliance that has everything pre-installed and configures the software on boot. This is actually not too difficult, thanks to the amazing Kiwi tool and all the knowledge we have accumulated over the years at SUSE about building appliances, as well as the small scripts we use for the CI of our OpenStack packages. Still, it required some work to adapt the setup to the contest and also to make sure that our Kilo packages (which were brand new and without much testing) were fully working. The clock result was 9 minutes and 6 seconds, resulting in a negative time of minus 10 minutes and 54 seconds (yes, the text in the picture is wrong) after the bonuses. Pretty impressive.

But we also wanted to show that our product would fare well, so Adam and I started looking at this. We knew it couldn't be faster than the approach Dirk picked, so from the start we targeted second position. There was not much to do for this approach, since it was similar to what was done in Paris, and work had recently gone into updating our SUSE OpenStack Cloud Admin appliance. Our first attempt failed miserably due to a nasty bug (which was actually caused by a unicode character in the ID of the USB stick we were using to install the OS... we fixed that bug later in the night). The second attempt went smoother and was actually much faster than we had anticipated: SUSE OpenStack Cloud deployed everything in 23 minutes and 17 seconds, which resulted in a final time of 10 minutes and 17 seconds after bonuses/penalties. And this was with a 10-minute penalty due to the use of Juno (as well as a couple of minutes lost debugging a setup issue that was just mispreparation on our side). A key contributor to this result is our use of Crowbar, which we've kept improving over time, and which really makes it easy and fast to deploy OpenStack.

Wall-clock time for SUSE OpenStack Cloud

Wall-clock time for SUSE OpenStack Cloud

These two results wouldn't have been possible without the help of Tom and Ralf, but also without the whole SUSE OpenStack Cloud team that works on a daily basis on our product to improve it and to adapt it to the needs of our customers. We really have an awesome team (and btw, we're hiring)!

For reference, three other contestants succeeded in deploying OpenStack, with the fastest of them ending at 58 minutes after bonuses/penalties. And as I mentioned earlier, there were even more contestants (including some who are not vendors of an OpenStack distribution), which is really good to see. I hope we'll see even more in Tokyo!

Results of the Rule the Stack contest

Results of the Rule the Stack contest

Also thanks to Intel for organizing this; I'm sure every contestant had fun and there was quite a good mood in the area reserved for the contest.

Update: See also the summary of the contest from the organizers.

12 May 2015

Deploying Docker for OpenStack with Crowbar

A couple of months ago, I met some colleagues of mine working on Docker and discussed how much effort it would be to add support for it to SUSE OpenStack Cloud. It's something that had been requested for a long time by quite a number of people, and we never really had time to look into it. To find out how difficult it would be, I started looking at it in the evening; the README confirmed it shouldn't be too hard. But of course, we use Crowbar as our deployment framework, and the manual way of setting it up is not really something we'd want to recommend. Would it be "not too hard" or just "easy"? There was only one way to know... And guess what happened next?

It took a couple of hours (and two patches) to get this working, including the time for packaging the missing dependencies and for testing. That's one of the nice things we get from using Crowbar: adding new features like this is relatively straightforward, so we can enable people to deploy a full cloud with all of these nice small features without requiring them to learn about all the technologies and how to deploy them. Of course, this was just a first pass (using the Juno code, btw).

Fast-forward a bit, and we decided to integrate this work. Since it was not a simple proof of concept anymore, we went ahead with some more serious testing. This resulted in us backporting patches for the Juno branch, but also making Nova behave a bit better, since it wasn't aware of Docker as a hypervisor. This last point is a major problem if people want to use Docker as well as KVM, Xen, VMware or Hyper-V; multi-hypervisor support is something that really matters to us, and this issue was actually the first one that got reported to us ;-) To validate all our work, we of course asked tempest to help us, and the results are pretty good (we still have some failures, but they're related to missing features like volume support).

All in all, the integration went really smoothly :-)

Oh, I forgot to mention: there's also a Docker plugin for Heat. It's now available with our Heat packages in the Build Service as openstack-heat-plugin-heat_docker (Kilo, Juno); I haven't played with it yet, but this post should be a good start for anyone who's curious about this plugin.

30 April 2015

5 Humanitarian FOSS projects

Over on opensource.com, I just posted an article on 5 humanitarian FOSS projects to watch, another instalment in the humanitarian FOSS series the site is running. The article covers worthy projects Literacy Bridge, Sahana, HOT, HRDAG and FrontlineSMS.

A few months ago, we profiled open source projects working to make the world a better place. In this new installment, we present some more humanitarian open source projects to inspire you.

Read more

15 April 2015

Be IP is hiring!

In case some readers of this blog are interested in working with open source software and VoIP technologies, Be IP (http://www.beip.be) is hiring a developer. Please see http://www.beip.be/BeIP-Job-Offer.pdf for the job description. You can contact me directly.

14 April 2015

Hackweek 12: improving GNOME password management, day 1

This week is Hackweek 12 at SUSE

My hackweek project is improving GNOME password management, by investigating password manager integration in GNOME.

Currently, I'm using LastPass which is a cloud-based password management system.

It has a lot of very nice features, such as:
  • 2 factor authentication
  • Firefox and Chrome integration
  • Linux support
  • JS web client with no install required, for logging in from an unknown system (I never needed it myself)
  • Android integration (including automatic password selection for applications)
  • open-source CLI client (lastpass-cli), allowing account-specific information to be extracted
  • encrypted data (nothing is stored unencrypted server side)
  • strong-password generator
  • supports encrypted notes (not only passwords)
  • server-based (clients sync), with offline operation supported
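That last generator feature is simple to approximate; here is a sketch of a strong-password generator using Python's secrets module (the charset and policy are my choices for illustration, not LastPass's):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%&*+-_"

def generate(length=20):
    """Draw characters from a CSPRNG; retry until one of each class appears."""
    while True:
        pwd = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)):
            return pwd

print(generate())
```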
However, it also has several drawbacks:
  • closed-source
  • subscription-based (required for Android support)
  • can't be hosted on my own server
  • doesn't integrate at all with GNOME desktop
I don't want to reinvent the wheel (unless it is really needed), which is why I spent my first day searching for the various password managers available on Linux and comparing their features (and testing them a bit).

So far, I found the following programs:
  • KeePass (GPL):
    • version 1.x written in Java, still supported, not actively developed
    • version 2.x written in C# (Windows oriented), works with Mono under Linux
    • UI feels really Windows-like
    • DB format change between v1 and v2
    • supports encrypted notes
    • password generator
    • supports plugins (a lot are available)
    • supports OTP (the KeeOtp plugin provides 2-factor auth through TOTP; HOTP is built-in)
    • shared db editing
    • support yubikey (static or hotp)
    • 2 Firefox extensions available (KeeFox, PassIFox)
    • 3 Android applications available (KeePass2Android supports an alternative keyboard; KeepShare supports an alternative keyboard plus the a11y framework to fill Android application forms, like LastPass)
    • Chrome extension available
    • JS application available
    • CLI available
    • big ecosystem of plugins and other applications able to process file format

  • KeePassX (GPL)
    • Qt4 "port" of KeePass (feels more like a Linux application than KeePass)
    • alpha version for DB v2 support
    • missing support for OTP
    • missing support for KeePassHttp (required by the Firefox extensions to talk to the main application); support is being done in a separate branch by a contributor, not merged
    • release are very scarse (latest release is April 2014, despite commits on git, very few people are contributing, according to git)
    • libsecret dbus support is being started by a contributor

  • Mitro:
    • the company that developed it was bought by Twitter last year; the project was released under the GPL, with no development since January.

  • Password Safe (Artistic license):
    • initially written by Bruce Schneier
    • beta version available on Linux
    • written in wxWidgets 3.0 / C++
    • YubiKey supported
    • Android application available; no keyboard nor a11y framework usage, only copy/paste (but allows syncing the DB with ownCloud and other cloud platforms)
    • CLI available
    • 3 different DB formats (pre-2.0, 2.0, 3.0)
    • password history
    • no Firefox extension, and the built-in "auto-type" function is anything but intuitive
    • supports merging several DBs

  • Encrypt:
    • same zero-knowledge framework as SpiderOak
    • Node.js based

  • Pass:
    • simple script on top of text files / GnuPG and optionally git (used for history; it can also be used to manage hosting the files)
    • steep learning curve (mostly CLI); GnuPG needs to be set up before use
    • one file per password entry, should make 
    • very basic Qt GUI available
    • basic Firefox extension available
    • basic Android application available
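To illustrate the "simple script on top of text files" design, here is a minimal sketch of the pass storage layout. The directory and entry names are made up for the example, and plain files stand in for the gpg-encrypted ones that pass actually writes:

```shell
# Sketch of the pass storage model: one file per entry under a store directory.
# Real pass encrypts each file with gpg and keeps history in git; a throwaway
# directory and a plain-text file stand in for both here.
STORE="$(mktemp -d)"    # pass default: ${PASSWORD_STORE_DIR:-~/.password-store}

mkdir -p "$STORE/web"
printf 's3cret\n' > "$STORE/web/example.com.gpg"  # pass would gpg --encrypt this

# listing entries is just walking the tree
find "$STORE" -name '*.gpg'
```

This one-file-per-entry layout is also what makes wrapping pass with GUIs and mobile clients comparatively straightforward.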
Unfortunately, none of those applications integrates properly (yet) with GNOME (master password prompt, locking the keyring when the desktop is locked, etc.).

I've also looked at gnome-keyring integration with the various browsers:
  • Several extensions already exist; one is fully written in JavaScript and works nicely (a port to libsecret is being investigated)
  • Chrome already has gnome-keyring and libsecret integration
  • Epiphany already works nicely with gnome-keyring
  • No password generator is available in Firefox / Chrome / Epiphany (nor in GTK+ on a more generic basis)
Unfortunately, each browser stores metadata for password entries in gnome-keyring in a slightly different format (field names), causing duplicated password entries and preventing keyring data from being shared across browsers.
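On the missing password generator noted above: the core of one is small enough to sketch in shell (illustrative only, not tied to any of the browsers or tools discussed here; the function name and character set are my own choices):

```shell
# generate a random password from /dev/urandom, default length 16 characters
genpw() {
    LC_ALL=C tr -dc 'A-Za-z0-9!@#%^&*' < /dev/urandom | head -c "${1:-16}"
    echo
}

genpw 20
```

A real generator integrated into GTK+ or a browser would of course also let the user pick the length and character classes.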

Conclusions for this first day of hackweek:
  • The KeePass file format seems to be the format of choice for password managers (a lot of applications are written around it)
  • The password manager which would fit my requirements is KeePass, but it is written in Mono (I don't want the Mono stack to come back on my desktops) and is too Windows oriented, so it is not an option.
  • KeePassX seems to be the way to go (even if it is Qt based), but it lacks a lot of features and I'm not sure whether it is worth spending the effort to add those missing features there.
  • Pass is extremely simple (which would make hacking around it pretty straightforward) but requires a lot of work around it (Android, GUI) to make it nicely integrated in GNOME.
I haven't yet made up my mind about which solution is best, but I'll probably spend the following days hacking around KeePassX (or a new program to wrap the KeePass DB into libsecret) and Firefox gnome-keyring integration.

Comments welcome, of course.

08 April 2015

GNU Cauldron 2015

This year the GNU Cauldron Conference is going to be held in Prague, Czech Republic, from August 7 to 9, 2015.

The GNU Cauldron Conference is a gathering of users and hackers of the GNU toolchain ecosystem.

Meaning that if you are interested in projects remotely related to the GNU C library, GNU Compiler Collection, the GNU Debugger or any toolchain runtime related project that has ties with the GNU system you are welcome!

If you are a Free Software project that is using the GNU Toolchain and would like your voice to be heard, or want to hang out with other users and hackers of that space, you are even more than welcome! If you have crazy ideas you'd like to discuss over a nice beverage of your choice, please join!

You just have to send a nice note to tools-cauldron-admin@googlegroups.com saying that you are coming, and that would act as a registration. The number of seats is limited, so please do not drag your feet too much :-)

And if you want to present a talk, there is a call for papers under way. You just have to send your abstract to tools-cauldron-admin@googlegroups.com. The exact call for papers can be read here.

So see you there, gals'n guys!

01 February 2015


Second and final day of FOSDEM at the GNOME booth.

Yesterday was quite a full day, with as always many talks in parallel, including a few of interest for my new job at Anevia, such as the ones on Postgres, SPDX, and open source license management in general.

Here at the GNOME booth, as every year, we see familiar faces: the local Belgian members such as fredp, cassidy and staz, the GNOME Foundation members who came from the other side of the world to attend the talks or help at the booth, and familiar French faces such as nicalsilva from Mozilla, who had kindly opened the doors of Mozilla's Paris office for last year's Traducthon.

New faces too, including a member of Épitech who would like to organise GNOME presentations for students. We had already had a request of this kind (from Polytech Tours, if I remember correctly), but it never came to anything. As part of their curriculum, Épitech students use openSUSE with GNOME, so they are already familiar with our environment. There is therefore potentially an attentive audience and, who knows, new contributors? Let's hope this contact leads to something. We had already discussed within GNOME-FR the idea of shared presentations precisely for this educational context.

For my part, I would like to push forward the translation of the developer documentation, which is in a really bad state at the moment. Documentation in French matters, especially for students who do not necessarily have a good command of technical English yet but would like to learn. If I cannot manage on my own, I think a new Traducthon dedicated to the developer documentation could be a solution.

This year, no T-shirts for sale; they stayed in Sweden for another conference. But we have a few posters, bags (sold out), and our silver stickers, the profits from the latter going to GNOME-FR. We are at a little over 200 stickers sold this weekend.

(photo taken with my lousy webcam :-p)

Feel free to come and see us at our booth until this evening, or at the year's upcoming events, probably Solution Linux on 19 and 20 May at Paris-La Défense!

29 January 2015

Samsung 840 EVO Performance fix

Several weeks ago Samsung released a fix for their 840 series SSDs, which had performance issues with data stored for a long time. While the fix procedure is quite simple to apply on Windows, it can be quite tricky when your SSD lives in a GNU/Linux powered system. To fix your SSD you will need a bootable USB key with the Samsung binaries. Moreover, the Samsung documentation is not really well written and can lead to confusion. So here are the steps for GNU/Linux users to fix their dear SSDs.

Some preps

Firstly, prepare a USB key (at least 512 MB, just to be sure) and download FreeDOS.

Creating the bootable USB key

Once FreeDOS is on your computer, plug the USB key in and find the device to interact with it. You can generally find the device using the dmesg command. This will output something like this:

[1017607.068095] usb 2-1: new high-speed USB device number 110 using ehci-pci
[1017607.278127] usb 2-1: New USB device found, idVendor=1b1c, idProduct=1ab1
[1017607.278135] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1017607.278140] usb 2-1: Product: Voyager
[1017607.278145] usb 2-1: Manufacturer: Corsair
[1017607.278150] usb 2-1: SerialNumber: AA00000000000634
[1017607.278936] usb-storage 2-1:1.0: USB Mass Storage device detected
[1017607.279084] scsi12 : usb-storage 2-1:1.0
[1017608.389828] scsi 12:0:0:0: Direct-Access Corsair Voyager 1100 PQ: 0 ANSI: 0 CCS
[1017608.390448] sd 12:0:0:0: Attached scsi generic sg2 type 0
[1017608.391272] sd 12:0:0:0: [sdb] 15663104 512-byte logical blocks: (8.01 GB/7.46 GiB)
[1017608.392259] sd 12:0:0:0: [sdb] Write Protect is off
[1017608.392266] sd 12:0:0:0: [sdb] Mode Sense: 43 00 00 00
[1017608.394784] sd 12:0:0:0: [sdb] No Caching mode page found
[1017608.394792] sd 12:0:0:0: [sdb] Assuming drive cache: write through
[1017608.402247]  sdb: sdb1
[1017608.405637] sd 12:0:0:0: [sdb] Attached SCSI removable disk

In this case you want to use the /dev/sdb drive, as seen in the log.
Now you can just write the FreeDOS image to the USB key. The image is compressed, so you'll need to decompress it first.

$ bunzip2 FreeDOS-1.1-memstick-2-256M.img.bz2
$ dd if=FreeDOS-1.1-memstick-2-256M.img of=/dev/sdb bs=512k
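It can be worth checking that the image really made it to the key before going further. A small sketch (the image and device names are the ones assumed above; adjust them to your setup):

```shell
# Compare the source image and the device byte for byte, but only over the
# image length: the USB key is usually larger than the image itself.
verify_write() {  # usage: verify_write <image> <device>
    size=$(stat -c %s "$1")
    cmp -n "$size" "$1" "$2"
}

# e.g.: verify_write FreeDOS-1.1-memstick-2-256M.img /dev/sdb && echo "write OK"
```

cmp exits non-zero and reports the first differing byte if the write went wrong.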

Copying the Samsung binaries

Download the Samsung binaries.

Mount the USB key and unzip those binaries at the root of the USB key. This way you will be able to use them from FreeDOS later.

# mount /dev/sdb1 /mnt
# unzip Samsung_Performance_Restoration_USB_Bootable.zip
# mv 840Perf/* /mnt
# umount /mnt
# eject /dev/sdb

The fix

Plug the USB key into your machine and reboot the host. Do what is necessary to boot from the USB key, then choose FreeDOS option 4, "Load FreeDOS without driver".

Once FreeDOS is running, just run the PERF.EXE file and the Samsung tool will start. Enter the index shown in front of the SSD you want to upgrade and fix; the utility will take care of everything (firmware upgrade and fix). Note that the pass fixing the SSD can take some time.

Once the tool has finished fixing your SSD, just reboot the host by typing reboot in FreeDOS. Do not forget to unplug the USB key to avoid booting from it again.

Enjoy your brand new fixed SSD!

25 January 2015

Ekiga 5 – Progress Report

Current Status: Ekiga 5 has progressed a lot lately. OpenHub is reporting high activity for the project. The main reason behind this is that I am again dedicating much of my spare time to the project. Unfortunately, we are again facing a lack of contributions. Most probably (among others) because the project has been […]

18 December 2014

Kernel hacking workshop

As part of our "community" program at Collabora, I've had the chance to attend a workshop on kernel hacking at UrLab (the ULB hackerspace). I had never touched any part of the kernel and always saw it as a scary thing for hardcore hackers wearing huge beards, so this was a great opportunity to demystify the beast.

We learned about how to create and build modules, interact with userspace using the /sys pseudo-filesystem and some simple tasks with the kernel internal library (memory management, linked lists, etc). The second part was about the old school /dev system and how to implement a character device.

I also discovered lxr.free-electrons.com/ which is a great tool for browsing and finding your way through a huge code base. It's definitely the kind of tool I'd consider re-using for other projects.

So, a very cool experience. I still don't see myself submitting kernel patches any time soon, but I may consider trying to implement a simple driver or something if I ever need to. Thanks a lot to UrLab for hosting the event, to Collabora for letting me attend, and of course to Hastake, who did a great job explaining all this and providing fun exercises (I had to reboot only 3 times! But yeah, next time I'll use a VM :) )

Club Mate, kernel hacking and bulgur

12 November 2014

GNOME Trademark and Groupon

If you regularly read the minutes of the GNOME Foundation you noticed since few months there is a dispute about the use of the GNOME trademark by Groupon. The company has released a point of sale tablet under the name GNOME despite the trademark being registered by the GNOME Foundation. So long story short, Groupon […]

The post GNOME Trademark and Groupon appeared first on Nothing Fancy.

17 September 2014

What’s in a job title?

Over on Google+, Aaron Seigo in his inimitable way launched a discussion about people who call themselves community managers. In his words: “the “community manager” role that is increasingly common in the free software world is a fraud and a farce”. As you would expect when casting aspersions on people whose job is to talk to people in public, the post generated a great, and mostly constructive, discussion in the comments – I encourage you to go over there and read some of the highlights, including comments from Richard Esplin, my colleague Jan Wildeboer, Mark Shuttleworth, Michael Hall, Lenz Grimmer and other community luminaries. Well worth the read.

My humble observation here is that the community manager title is useful, but does not affect the person’s relationships with other community members.

First: what about alternative titles? Community liaison, evangelist, gardener, concierge, “cat herder”, ombudsman, Chief Community Officer, community engagement… all have been used as job titles to describe what is essentially the same role. And while I like the metaphors used for some of the titles like the gardener, I don’t think we do ourselves a service by using them. By using some terrible made-up titles, we deprive ourselves of the opportunity to let people know what we can do.

Job titles serve a number of roles in the industry: communicating your authority on a subject to people who have not worked with you (for example, in a panel or a job interview), and letting people know what you did in your job in short-hand. Now, tell me, does a “community ombudsman” rank higher than a “chief cat-herder”? Should I trust the opinion of a “Chief Community Officer” more than a “community gardener”? I can’t tell.

For better or worse, “Community manager” is widely used, and more or less understood. A community manager is someone who tries to keep existing community members happy and engaged, and grows the community by recruiting new members. The second order consequences of that can be varied: we can make our community happy by having better products, so some community managers focus a lot on technology (roadmaps, bug tracking, QA, documentation). Or you can make them happier by better communicating technology which is there – so other community managers concentrate on communication, blogging, Twitter, putting a public face on the development process. You can grow your community by recruiting new users and developers through promotion and outreach, or through business development.

While the role of a community manager is pretty well understood, it is a broad enough title to cover evangelist, product manager, marketing director, developer, release engineer and more.

Second: The job title will not matter inside your community. People in your company will give respect and authority according to who your boss is, perhaps, but people in the community will very quickly pigeon-hole you – are you doing good work and removing roadblocks, or are you a corporate mouthpiece, there to explain why unpopular decisions over which you had no control are actually good for the community? Sometimes you need to be both, but whatever you are predominantly, your community will see through it and categorize you appropriately.

What matters to me is that I am working with and in a community, working toward a vision I believe in, and enabling that community to be a nice place to work in where great things happen. Once I’m checking all those boxes, I really don’t care what my job title is, and I don’t think fellow community members and colleagues do either. My vision of community managers is that they are people who make the lives of community members (regardless of employers) a little better every day, often in ways that are invisible, and as long as you’re doing that, I don’t care what’s on your business card.


15 August 2014

GNOME.Asia Summit 2014

Everyone has been blogging about GUADEC, but I’d like to talk about my other favorite conference of the year, which is GNOME.Asia. This year it was in Beijing, a mightily interesting place: a giant megalopolis with grandiose architecture, but at the same time surprisingly easy to navigate with its efficient metro system and affordable taxis. The air quality, though, is as bad as they say, at least during the incredibly hot summer days when we visited.

The conference itself was great. This year it was co-hosted with FUDCon’s Asian edition, and it was interesting to see a crowd that’s really different from the one that attends GUADEC: many more people involved in evangelising, deploying and using GNOME as opposed to just developing it, which gives me a different perspective.

On a related note, I was happy to see a healthy delegation from Asia at GUADEC this year!

Sponsored by the GNOME Foundation

31 March 2014

Some introduction seems to be necessary

It appears my blog is currently reaching some places like planet.gnome.org and planet.fedoraproject.org, so I think some introduction may be necessary. My name is Baptiste Mille-Mathias. I’m French, and I live in the south of France, near Cannes, with my partner Célia, my son Joshua and my daughter Soline. During work days I’m a System/Application Administrator […]

The post Some introduction seems to be necessary appeared first on Nothing Fancy.

30 March 2014

Pitivi and MediaGoblin, same fight!

Rather belatedly, let me point out (or remind you) that Pitivi, the video editing software, has launched a fundraising campaign through the GNOME Foundation. The goal: help get version 1.0 out by raising enough funds to dedicate people full time to that task and, if there is money to spare, fund new features, which you can vote for. You will find more information in this post by antistress.

On another note, MediaGoblin is software that lets you host your own equivalent of YouTube, your photos, and so on, and share all that with your friends. No more censorship problems: you host the content yourself, and this decentralised model is what made the web strong. MediaGoblin is also running a fundraiser. I can only encourage you to donate to support these projects :)

16 February 2014


I saw a link to OpenLibernet, and after reading their FAQ I believed there was a fundamental problem. I quickly read the full paper but found no answer.

I guess I have missed something, so please explain it to me :)

A peer address is the hash of a cryptographic public key. It is used to encrypt certain packets as part of the routing protocol, serve as a payment address for the payment system (similar to a Bitcoin’s wallet address), but also serves as a unique identifier for a node, similar to IP Addresses in the current internet.

Also, a node may simply generate a new Peer Address anytime it chooses to.

When the balance of a neighbor hits a certain threshold, a payment request is initiated.

Malicious nodes could however cheat their neighbors and refuse to pay them their due traffic. For that, the protocol is designed to punish such malicious behavior through ostracism. A node will be automatically isolated from the network until it pays all its dues and resolves all conflicts with its neighbors.

Turkish Cat

What is preventing a malicious node from re-joining the network with a new peer address when it gets close to receiving a payment request, discarding its balance?

The only limitation I see is "First, and to eliminate the churn caused by unstable nodes, a Layer 2 link becomes active only after it has been alive for a set amount of time.", but this is not a problem if you start another client in parallel when getting close to a payment threshold and switch to the new peer address when it is ready.

08 November 2013

Building a single GNOME module using jhbuild

I often see new contributors (and even seasoned hackers) wanting to hack on a GNOME module, say Empathy, trying to build it using:

jhbuild build empathy

This is obviously correct, but it asks jhbuild to build not only Empathy but also all its dependencies (62 modules), so you'll end up building most of the GNOME stack. While building a full GNOME stack may sometimes be useful, that's generally not needed and definitely not the easiest and fastest way to proceed.

Here is what I usually do.

First I make sure to have installed all the dependencies of the module using your distribution packaging system. This is done on Fedora using:

sudo yum-builddep empathy

or on Ubuntu/Debian:

sudo apt-get build-dep empathy

If you are using a recent distribution, there are good chances that most of these dependencies are still recent enough to build the module you want to hack on. Of course, as you are trying to build the master branch of the project, some dependencies may have been bumped to one of the latest development releases. But first let's try to build just Empathy.

jhbuild buildone empathy

There are good chances that some dependencies are missing or too old; you'll then see this kind of error message:

No package 'libsecret-1' found
Requested 'telepathy-glib >= 0.19.9' but version of Telepathy-GLib is 0.18.2

That means you'll have to build these two libraries in your jhbuild as well. Just check the list of dependencies of the module to find the exact module names:

jhbuild list empathy | grep secret
jhbuild list empathy | grep telepathy

In this example you'll see you have to build the libsecret and telepathy-glib modules:

jhbuild buildone libsecret telepathy-glib

Of course these modules may have some extra dependencies of their own, so you may have to do a few iterations of this process before being able to actually build the module you care about. But, from my experience, if you are using a recent distribution (like the latest Fedora release) the whole process will still be much faster than building the full stack. Furthermore, it will save you from having to deal with build errors from potentially 62 modules.
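The iteration above can be partly automated: the module names jhbuild needs are easy to extract from the two error shapes shown earlier. A small sketch (the error lines are the ones quoted above; the sed patterns are mine, not part of jhbuild):

```shell
# Pull missing module names out of jhbuild/pkg-config error output.
parse_missing() {
    sed -n \
        -e "s/^No package '\(.*\)' found\$/\1/p" \
        -e "s/^Requested '\([^ ']*\).*\$/\1/p"
}

printf "%s\n" \
    "No package 'libsecret-1' found" \
    "Requested 'telepathy-glib >= 0.19.9' but version of Telepathy-GLib is 0.18.2" \
    | parse_missing
# prints libsecret-1 and telepathy-glib
```

The printed names can then be checked against jhbuild list and fed to jhbuild buildone as above.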

03 August 2013

GNOME.Asia 2013

This June, I was in Seoul, Korea for the GNOME.Asia Summit, the yearly occasion to meet up with the Asian side of the GNOME community. As always, it was an awesome conference, with so many cool people. I learned about new projects like Seafile and got to meet new friends and catch up with old ones.

I’d also like to thank my employer, Collabora, for sponsoring my flight, and the GNOME Foundation for paying for the hotel.

Sponsored by Collabora                              Sponsored by the GNOME Foundation

25 March 2013

SPICE on OSX, take 2

A while back, I made a Vinagre build for OSX. However, reproducing this build needed lots of manual tweaking, the build did not work on newer OSX versions, and in the meantime the recommended SPICE client became remote-viewer. In short, this work was obsolete.

I've recently looked at this again, but this time with the goal of documenting the build process and making the build as easy as possible to reproduce. This is once again based off gtk-osx, with an additional moduleset containing the SPICE modules, and a script to download/install most of what is needed. I've also switched to building remote-viewer instead of vinagre.

This time, I've documented all of this work, but all you should have to do to build remote-viewer for OSX is to run a script, copy a configuration file to the right place, and then run a usual jhbuild build. Read the documentation for more detailed information about how to do an OSX build.

I've uploaded a binary built using these instructions, but it's lacking some features (USB redirection comes to mind), and it's slow, etc, etc, so .... patches welcome! ;) Feel free to contact me if you are interested in making OSX builds and need help getting started, have build issues, ...

11 December 2012

FOSDEM 2013 Crossdesktop devroom Call for talks

The call for talks for the Crossdesktop devroom at FOSDEM 2013 comes to an end this Friday. Don't wait: submit your talk proposal about your favourite part of GNOME now!

Proposals should be sent to the crossdesktop devroom mailing list (you don't have to subscribe).

29 November 2012


Wandering in embedded land: part 2, Arduino turned remote control

Now with the Midea AC remote control mostly deciphered, the next step is to emulate the remote with an Arduino, since that's the system I use for the embedded greenhouse control. While waiting for my mail-ordered IR LED (I didn't want to solder one off my existing AC controllers), I started writing a bit of code and looking at the integration problems.

The hardware side

One of the challenges is that the Arduino system is already heavily packed: basically I use all the digital inputs/outputs except 5 (and 0 and 1, which are hooked to the serial support), and 2 of the 6 analog inputs, as the card already drives 2 SHT1x temp/humidity sensors, 2 light sensors, a home-made 8-way relay board, and a small LCD display. There isn't much room left, physically or in memory, for more wires or code! Fortunately, driving a LED requires minimal resources; the schematic is trivial:

I actually used a 220 Ohm resistor since I didn't have a 100 Ohm one; the only effect is how far away the signal may be received, really not a problem in my case. Also, I initially hooked it on pin 5, which shouldn't have been a problem, as that's the free slot I have available on the Arduino.

The software side

My thinking was: well, I just need to recreate the same set of light patterns to emulate the remote control and that's done. Sounds fairly simple, so I started coding routines which would switch the LED on or off for 1T, 3T and 4T durations. Thus the core of the code was like:

/* durations (multiples of the base period T) are defined as macros */
void emit_midea_start(void) {
    ir_down(MIDEA_8T);
    ir_up(MIDEA_8T);
}

void emit_midea_end(void) {
    ir_down(MIDEA_1T);
    ir_up(MIDEA_8T);
}

void emit_midea_byte(byte b) {
    int i;
    byte cur = b;

    for (i = 0; i < 8; i++) {
        ir_down(MIDEA_1T);                      /* 1T low starts each bit */
        ir_up((cur & 1) ? MIDEA_3T : MIDEA_1T); /* 3T high = 1, 1T high = 0 */
        cur >>= 1;
    }
    cur = ~b;                        /* the byte is followed by its complement */
    for (i = 0; i < 8; i++) {
        ir_down(MIDEA_1T);
        ir_up((cur & 1) ? MIDEA_3T : MIDEA_1T);
        cur >>= 1;
    }
}

where ir_down() and ir_up() respectively activate and deactivate pin 5 (set as OUTPUT) for the given duration, with the durations defined as macros.

Playing with 2 arduinos simultaneously

Of course, to test my code the simplest way was to set up the new module on a second Arduino, positioned in front of the Arduino with the IR receptor that was running the same code as used for decoding the protocol.

The nice thing is that you can hook up the two Arduinos to 2 different USB cables connected to the same machine; they will report as ttyUSB0 and ttyUSB1, and once you have looked at the serial output you can find which is which. The only cumbersome part is having to select the other serial port when you want to switch boxes, either to monitor the output or to upload a new version of the code; so far things are rather easy.

Except it just didn't work!!!

Not the Arduino: I actually replaced the IR LED by a normal one from time to time to verify it was firing for a fraction of a second when emitting the sequence. No, the problem was that the IR receiver was detecting transitions, but none with the expected durations, or order, nothing I could really consider a mapping of what my code was sending. So I tweaked the emitting code over and over, rewriting the timing routines in 3 different ways, trying to disable interrupts, etc... Nothing worked!

Clearly there was something I hadn't understood... so I started searching on Google and reading, first about timing issues on the Arduino, but things ought to be correct there, and then about existing remote control code for the Arduino and others. Then I hit Ken Shirriff's blog on his IR library for the Arduino and realized that the IR LED and the IR receiver don't operate at the same level. The LED really can just be switched on or off, but the IR receiver is calibrated for a given frequency (38 KHz in this case) and will not report whether it gets IR light, but whether it gets the 38 KHz pulse carried by the IR light. In a nutshell, the IR receiver was decoding my analog 0s but not my 1s, because it was failing to catch a 38 KHz pulse: I was switching the IR LED permanently on, and that was not recognized as a 1, generating erroneous transitions.

Emitting the 38KHz pulse

Ken Shirriff has another great article titled Secrets of Arduino PWM explaining how to generate a pulse automatically on *selected* Arduino digital outputs and the details needed to set this up. This is rather complex and nicely encapsulated in his infrared library code, but I would suggest having a look if you're starting advanced developments on the Arduino.

The simplest is then to use Ken's
IRremote library
by first installing it into the installed arduino environment:

  • create a new directory /usr/share/arduino/libraries/IRremote (as root)
  • copy IRremote.cpp, IRremote.h and IRremoteInt.h there

and then use it in the midea_ir.ino program:

#include <IRremote.h>

IRsend irsend;

int IRpin = 3;

This includes the library in the resulting program and defines an IRsend object that we will use to drive the IR LED. One thing to note is that by default the IRremote library drives only digital pin 3; you can modify it to use a couple of other pins, but it is not possible to drive the PWM for digital pin 5, which is the one currently unused on my greenhouse Arduino.

Then the idea is to just replace the ir_down() and ir_up() calls in the code with the equivalent low-level entry points driving the LED in the IRsend object: first use irsend.enableIROut(38) to enable the pulse at 38 KHz on the default pin (digital 3), then use irsend.mark(usec) as the equivalent of ir_down() and irsend.space(usec) as the equivalent of ir_up():

void emit_midea_start(void) {
    irsend.enableIROut(38);   /* start the 38 KHz carrier on digital pin 3 */
    irsend.mark(MIDEA_8T);
    irsend.space(MIDEA_8T);
}

void emit_midea_end(void) {
    irsend.mark(MIDEA_1T);
    irsend.space(MIDEA_8T);
}

void emit_midea_byte(byte b) {
    int i;
    byte cur = b;
    byte mask = 0x80;

    for (i = 0; i < 8; i++) {
        irsend.mark(MIDEA_1T);
        irsend.space((cur & mask) ? MIDEA_3T : MIDEA_1T);
        mask >>= 1;
    }
}


Checking with a normal LED allowed me to spot a brief light when emitting the frame, so it was basically looking okay...

And this worked: placing the emitting Arduino in front of the receiving one, the IRanalyzer started to decode the frames, just as with the real remote control. Things were looking good again!

But it failed the real test... when put in front of the AC, the hardware didn't react; some improvement was still needed.

Check your timings, theory vs. practice

I suspected some timing issue, not with the 38KHz pulse, as the code from Ken was working fine for an array of devices, but rather with how my code was emitting. Another precious hint was found in the blog about the library:
IR sensors typically cause the mark to be measured as longer than expected and the space to be shorter than expected. The code extends marks by 100us to account for this (the value MARK_EXCESS). You may need to tweak the expected values or tolerances in this case.

Remember that the receptor does some logic on the input to detect the pulse at 38 KHz; that means that while a logic 0 can be detected relatively quickly, it will take at least a few beats before the sync to the pulse is recognized and the receiver switches its output to a logic 1. In a nutshell, a 1T low duration takes less time to recognize than a 1T high duration. I was also afraid that the overall time to send a full frame would drift over the fixed limit needed to transmit it.

So I tweaked the emitting code to count the actual overall duration of the frames, and also added to the receiver decoding code a display of the first 10 durations between transitions. I then reran the receiver, looking at the same input from the real remote control and from the Arduino emulation, and found that on average:

  • the emulated 8T down was 200us too long
  • the emulated 8T up was 100us too short
  • the emulated 1T down at the beginning of a bit was 100us too long
  • the emulated 1T up at the end of logical 0 was 80us too short
  • the emulated 3T up at the end of logical 1 was 50us too short

After tweaking the durations accordingly in the emitter code, I got my first successful emulated command to the AC, properly switching it off.

I then finished the code to provide the weird temperature conversions
front end routines and then glue that as a test application looping over
a minute:

  • switching to cooling to 23C for 15s
  • switching to heating to 26C for 15s
  • switching the AC off for 30s

The midea_ir_v1.ino code
is available for download, analysis and reuse. I would suggest not
letting this run for long in front of an AC, as the very frequent change of mode
may not be good for the hardware (nor for the electricity bill !).


Generating the 38KHz pulse in software

While the PWM generation has a number of advantages, especially w.r.t. the
regularity of the pattern and the absence of drift due, for example, to delays
handling interrupts, in my case it has the serious drawback of forcing the use
of a given pin (3 by default, or 9 if switching to a different timer in the
IRremote code), and those are not available, short of getting the soldering
iron out and changing some of the existing routing on my add-on board. So
the next step was to also implement the 38KHz pulse in software. This
should only affect the up phase; the down phase consists of no emission
and is hence implemented by a simple:

void send_space(int us) {
    digitalWrite(IRpin, LOW);
    delayMicroseconds(us);
}

The up part should be divided into HIGH for most of the duration, followed
by a short LOW marking the pulse. 38 KHz means a 26.316 microsecond
period. Since the arduino documentation indicates delayMicroseconds() is
reliable only above 3 microseconds, it seems reasonable to use
a 22us HIGH / 4us LOW split, and expect the remaining computation time to fill
the sub-microsecond remainder of the period, which ought to be accurate enough.
One of the points of the code below is to try to avoid excessive drift, in two
ways:

  • by doing the accounting over the total length of the up period,
    not trying to just stack 21 periods
  • by running a busy loop when the delay left is minimal, rather than
    calling delayMicroseconds() for a too small amount (not sure this is
    effective: the micros() value seems to be updated periodically by a
    timer interrupt handler, as it doesn't look like the chip provides a
    fine grained counter).

The resulting code doesn't look very nice:

void send_mark(int us) {
    unsigned long e, t = micros();

    e = t + us;                      /* end of the whole mark */
    while (t < e) {
        digitalWrite(IRpin, HIGH);   /* 22us HIGH part of the period */
        if (e - t < 22) {            /* not enough left: busy wait to the end */
            while ((t = micros()) < e);
            digitalWrite(IRpin, LOW);
            break;
        }
        delayMicroseconds(22);
        digitalWrite(IRpin, LOW);    /* 4us LOW pulse closing the period */
        t = micros();
        if (e - t < 4) {             /* too close to the end for a delay */
            while ((t = micros()) < e);
            break;
        }
        delayMicroseconds(4);
        t = micros();
    }
}

But to my surprise, once I replaced all the irsend.mark() and irsend.space()
calls by equivalent calls to send_mark() and send_space(), the IRanalyzer running
on the second arduino properly understood the sequence, proving that the IR
receiver properly picked up the signal, yay !

Of course that didn't work out the first time on the real hardware.
After a bit of analysis of the resulting timings exposed by IRanalyzer,
I noticed the marks at the beginning of bits were all nearly 100us too long;
I switched the generation from 450us to 350us, and bingo, that worked with the
real aircon !

The resulting midea_ir_v2.ino module
is very specific code, but it is tiny, less than 200
lines, and the hardware side is also really minimal: a single resistor and
the IR led.


The code is now plugged in and working, but v2 just could not work in the
real environment with all the other sensors and communication going on.
I suspect that the amount of foreign interrupts breaks the 38KHz
pulse generation; switching back to the PWM generated pulse using the
IRremote library works in a very reliable way. So I had to unsolder pin 3
and reassign it to the IR led, but that was a small price to pay compared
to trying to debug the timing issues in situ !

The next step in the embedded work will be to replace the aging NSLU2
driving the arduino by a shiny new Raspberry Pi !

This entry will be kept at http://veillard.com/embedded/midea.html.

26 November 2012


Wandering in embedded land: part 1, Midea 美的 aircon protocol

I have been a user of Arduinos for a few years now; I use them to
control my greenhouse (I grow orchids). This means collecting data
for various parameters (temperature, hygrometry, light) and actioning
a collection of devices in reaction (fan, misting pump, fogging machine,
a heater). The control part is actually done by an NSLU2, which also
collects the data, exports them as graphs on the internet, and allows me
to manually jump in and take action if needed, even if I'm far away,
using an ssh connection.

This setup has been working well for me for a few years, but since our
move to China I have had an aircon installed in the greenhouse, like in
other parts of the home. And that's where I have a problem: this AC of
brand Midea (a very common home appliance brand in China) can only be
controlled through a remote control. And until now that meant I had no
way to automate heating or cooling, which is perfectly unreasonable :-)

After some googling, the most useful reference I found about those
is the Tom's
Site page on building a remote adapter
for them. It explains most
parts of the protocol, but not all of them: basically he stopped at the
core of the interface and didn't go into details, for example the
command encoding. The 3 things I really need are:

  • Start cooling to a given temperature
  • Start heating to a given temperature
  • Stop the AC

I don't really need full fan speed control; low speed is quite sufficient
for the greenhouse.

Restarting the Arduino development

I hadn't touched the Arduino development environment for the last few
years, and I remember it being a bit painful to set up at the time. With
Fedora 17, things have changed: a simple

yum install arduino

and launching the arduino tool worked the first time; it actually asked me
for permission to tweak groups to allow me, as the current user, to talk
through the USB serial line to the Arduino. Once that was done, and after
logging in again, everything worked perfectly. Congratulations to the
packagers, well done !
The only software annoyance is that it often takes a dozen seconds between the
time an arduino is connected or powered and when it appears in the
ttyUSB? serial port options in the UI, but that's probably not arduino's fault.

The arduino environment didn't really change in all those years;
the two notable exceptions are the very long list of different boards supported
now, and the fact that arduino code files were renamed from .pde to .ino !

Learning about the data emitted

The first thing needed was to double check Tom's results with
our own hardware, then learn about the protocol to be able to construct
the commands above. To do this I hooked an IR receiver to the Arduino on
digital pin 3; the schematic below shows the logic, it's very simple:

Then I loaded a modified (for IRpin 3) version of Walter Anderson's
IRanalyzer onto the Arduino and started firing the aircon remote control at the
receiver and looked at the result: total garbage ! Whatever key was
pressed, the output had no structure and actually looked as random as
the input without any key being pressed :-\

It took me a couple of hours of tweaking to find out that the metal
enclosure of the receiver had to be grounded too; the GND pin wasn't
connected, and not doing so led to random results !

Once that was fixed, the data read by the Arduino started to make some
sense, and it was looking like the protocol was indeed the same as the
one described on Tom's site.

The key to understanding how the remote works is that it
encodes a digital input (3 bytes for the Midea AC protocol) as a set of
0 and 1 patterns, each of them defined by a no-emission (low) duration
followed by either a short analog pulse at 38KHz to encode 0, or a long
analog pulse at 38KHz to encode 1:

Each T delay corresponds to 21 pulses of a 38KHz signal; this is
thus a variable length encoding.

As I was making progress on the recognition of the patterns sent
by the aircon, I modified the program to give a more synthetic view
of the received frames. You can use my own extended version:
it allows recording a variable number of transitions,
detects the start transition as a 3-4 ms up, and the end as a 3-4 ms
down from the emitter, then shows the transmitted data as a bit field and
as hexadecimal bytes:

Bit stream detected: 102 transitions
D U 1011 0010 0100 1101 1001 1111 0110 0000 1011 0000 0100 1111 dUD Bit stream end : B2 4D 9F 60 B0 4F !
4484 4324 608 1572 604 472 596 1580 600 1580 !

So basically what we find here:

  • the frame start marker: 4T down, 4T up
  • 6 bytes of payload: this is actually 3 bytes of data, but after
    each byte is sent, its complement is sent too
  • the end of the frame consists of 1T down, 4T up and then 4T down

There are a few interesting things to note about this encoding:

  • It is redundant, allowing the detection of errors or stray data coming from
    other 38KHz remotes (which are really common !)
  • All frames are actually sent a second time just after the first one,
    so the amount of redundancy is around 4 to 1 in the end !
  • By reemitting inverted values, the amount of 0s and 1s sent is the
    same; as a result a frame always has a constant duration even if the
    encoding uses variable length
  • A double frame duration is around : 2 * (8 + 8 + 3*2*8 + 3*4*8 + 1 + 8) * 21 / 38000 ~= 186 ms

The protocol decoding

Once the frame is decoded properly, we are down to analyzing
only 3 bytes of input per command. So I started pressing the buttons
in various ways and recording the emitted sequences:

Cool 24 fan level 3
1011 0010 0100 1101 0011 1111 1100 0000 0100 0000 1011 1111 B2 4D 3F C0 40 BF
Cool 24 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0100 0000 1011 1111 B2 4D 9F 60 40 BF
Cool 20 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0010 0000 1101 1111 B2 4D 9F 60 20 DF
Cool 19 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0011 0000 1100 1111 B2 4D 9F 60 30 CF
Heat 18 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0001 1100 1110 0011 B2 4D 9F 60 1C E3
Heat 17 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0000 1100 1111 0011 B2 4D 9F 60 0C F3
Heat 29 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 1010 1100 0101 0011 B2 4D 9F 60 AC 53
Heat 30 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 1011 1100 0100 0011 B2 4D 9F 60 BC 43
Stop Heat 30 fan level 1
1011 0010 0100 1101 0111 1011 1000 0100 1110 0000 0001 1111 B2 4D 7B 84 E0 1F
Cool 28 fan 1
1011 0010 0100 1101 1001 1111 0110 0000 1000 0000 0111 1111 B2 4D 9F 60 80 7F
Stop Cool 28 fan 1
1011 0010 0100 1101 0111 1011 1000 0100 1110 0000 0001 1111 B2 4D 7B 84 E0 1F

The immediately obvious information is that the first byte is the constant
0xB2, as noted by Tom's Site. Another thing one can guess is that the command
from the control is (in general) absolute, not relative to the current state
of the AC, so commands are idempotent: if the AC failed to catch one key press,
it will still reach the correct state when the press is repeated. This just
makes sense from a UI point of view ! After a bit of analysis and further
testing, the code for the 3 bytes seems to be:

[1011 0010] [ffff 1111] [ttttcccc]

Where tttt == temperature in Celsius, encoded as follows:

17: 0000, 18: 0001, 19: 0011, 20: 0010, 21: 0110,
22: 0111, 23: 0101, 24: 0100, 25: 1100, 26: 1101,
27: 1001, 28: 1000, 29: 1010, 30: 1011, off: 1110

I fail to see any logic in that encoding; I dunno what the Midea
guys were thinking when picking those values. What sucks is that the protocol
seems to have a hardcoded 17-30 range, while for the orchids
I basically try to stay in the range 15-35, i.e. I will have to play with the
sensor outputs to do the detection. Moreover, my testing shows that even when
asked to keep warm at 17, the AC will continue to heat until well above
19C; I can't trust it to be accurate, so best is to keep the control and logic
on our side !

cccc == command: 0000 to cool, 1100 to heat, 1000 for automatic selection,
and 1101 for the moisture removal mode.

Lastly, ffff seems to be the fan control: 1001 for low speed, 0101 for
medium speed, 0011 for high speed, 1011 for automatic, and 1110 for off. There
is also an energy saving mode, useful at night, where
the fan runs even slower than the low speed, but I haven't yet understood
how that actually works.

There are still 4 bits left undeciphered; they could be related to 2
functions that I don't use: a timer and the oscillation of the air flow. I
didn't try to dig, especially with a remote control and documentation
in Chinese !

Last but not least: the stop command is 0xB2 0x7B 0xE0, it's the same
whatever the current state might be.

At this point I was relatively confident I would be able to control the
AC from an Arduino, using a relatively simple IR LED control, it ought to
be a "simple matter of programming", right ?

Well, that will be the topic of the next part ;-) !

This entry will be kept at http://veillard.com/embedded/midea.html.

19 February 2012

Tip for python-mode with Emacs

If you expect 'Alt + d' will only remove the first part 'foo_' of 'foo_bar' with the great python-mode, you can make this change to python-mode.el:

- (modify-syntax-entry ?\_ "w" py-mode-syntax-table)
+ (modify-syntax-entry ?\_ "_" py-mode-syntax-table)

Thank you Ivan.

Update with python-mode v6.0.4, add this line to python-mode-syntax-table (line 153):

(modify-syntax-entry ?\_ "_" table)

27 January 2012



My last FOSDEM participation was in 2004, and I still keep in mind many good moments with my French and Belgian GNOME friends !


So I'm totally excited to meet them again in 2012 ... :)

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

15 November 2011

Outreach in GNOME

The GNOME Montréal Summit was held a month ago now, and not only was it lots of fun, but also a very productive time. Marina held a session about outreach in GNOME, and we spent time discussing different ways to improve welcoming and attracting people to GNOME. Let me share some of the points we raised, supplemented by my own personal opinions, which do not reflect those of my employer, when I'll have a job.

A warm welcome

There has been a lot of nice work done structuring and cleaning up GNOME Love page. We now have a list of people newcomers can contact if they are interested in a particular project. Feel free to add your name and project to the list, the more entry points we get, the better for them!

I tend to think there is still a bit too much content on the GNOME Love page, maybe we could use more pretty diagrams (platform overview, ways to get involved) to keep the excitement growing and to reduce the amount of text we have right now (GUI tutorials, books about GNOME,  tips & tricks). Feedback appreciated!

Start small

We tend to think of contributions as patches and a certain amount of code added to a project. However, it's not easy at all for newcomers to just pop in and work on a patch, especially in GNOME where most software follows strict rules (as in coding style, GObject API style, etc.). And since GNOME maintains (again, for the most part) very high quality in its code, backed by many hackers, whether they're part of a company or independent contributors, it makes the landing of a patch even tougher.

Which is why we should encourage everyone who wants to get involved to work on small tasks, be it fixing a string typo, rewording, or marking plural forms for translation. Working on manageable changes ensures that the patches are completed, and landing these patches builds confidence to work on bigger ones. Having your name in the commit log is a great reward, one that encourages sticking around and digging for more.

Advertise early, advertise often

If we want to get loads of people coming toward GNOME, we should definitely talk more and spread the word about the GNOME Outreach Program for Women (GOPW) and the Google Summer of Code (GSoC) earlier.

Google doesn’t announce the program very far in advance and approved organizations are only published three weeks before the application deadline, but we should encourage students to get involved in GNOME early and keep an eye out for such announcements. Having a list of mentors who can help newcomers anytime throughout the year and having that list included on the Google Summer of Code wiki page of organizations that provide year-round informal mentorship should help attract students to GNOME.

On our side, we could definitely gather ideas and promote the programs earlier. I don't have exact dates in mind, but our KDE fellows promote the Summer of Code in early March, if not before. Not only will that help better spread the word, but students might get involved earlier, and get to know the tools/community before the actual program.

Communication is key to success

We have to get better at communicating with interns, and make sure they get the help and feedback they need. We have different channels of communication in GNOME, mainly IRC and mailing lists. Both are a bit intimidating to the newcomer (I still proceed with extreme care when I use them), so it would be good to have a short tutorial about the main mailing lists around, how to connect on IRC and what to expect out of it.

Always two there are, no more, no less

In order to increase the chances of success for the interns, we need good mentors. Most people underestimate what it takes to be a good mentor: being nice, supportive, competent, enthusiastic. You have to remember you’re helping someone to land in the big GNOME land without too much hassle, so consider it carefully. I encourage you to read this very informative blog post if you’re thinking about mentoring a student.

The Summer of Code administrators at GNOME could perhaps keep an eye on mentors as well as students, not with weekly reports but just by poking them from time to time and making sure everything is going well.

Show me the way

To help students set up their workflow, it would be great to have full-length screen-casts demonstrating how to fix a bug in GNOME, starting on the Bugzilla page and finishing on the same page when attaching the final patch. This means going through cloning the module with Git, using grep to find the faulty line, editing the code, using Git to look at the diff and format the final patch. All this in one video would really help connect the parts and suggest a way to work for students.

GNOME Love bug drive

Please consider attaching the gnome-love keyword when you file or triage a bug that is easy to fix. A selection of current GNOME Love bugs is essential to help newcomers figure out how they can start contributing.

Good GNOME Love bugs are trivial or straight forward bugs that everyone agrees on, e.g. paper cut bugs or corner cases. It’s helpful to specify the file or files that will need to be modified and any reference code that does something similar in the bug. Even most trivial bugs are suitable candidates, because in the end, fixing a GNOME Love bug is as much about learning the process, as about the fix itself!

Get involved

If you want to help us gather more people around GNOME and help them find their spot in our community, make sure to subscribe to the outreach-list mailing list.

Thanks for reading!

And thanks to Marina and Karen for reviewing this post!

09 October 2011

Feedback on GNOME 3.0

After 5 months with GNOME 3.0, I'm really happy with the experience. At the end of a work day,
my mind is no longer exhausted from fighting window placement and hunting for applications.

GNOME 3.0 is really stable, except with the Open Source driver on my Radeon 5870 (4 crashes in 2 months).

I really like the behavior of dual-head where the secondary screen has only one virtual screen.
For me, there are just 3 annoying points:

  • Ctrl + Del to remove a file in Nautilus: maybe it's a Fedora setting, but this change is just @!#, I already have a Trash to undo my mistakes (http://www.khattam.info/howto-enable-delete-key-in-nautilus-3-fedora-15-2011-06-01.html)
  • Alt key to shutdown: no, I don't want to waste energy for days, and my PC boots quickly.
  • only vertical virtual screens: I find it a bit painful to move down two screens when the screen would be reachable with one move in a 2x2 layout, but I understand that layout doesn't fit well with the GNOME 3 design.

To have a good experience with GNOME 3, I use:

  • Windows key + type to launch everything
  • Ctrl + Shift + Alt + arrows to move the application between the virtual screen
  • Ctrl + click in the launcher when I really want a new instance (the default behavior is perfect)
  • snap à la Windows 7 is great
  • Alt + Tab then the arrow keys to select an app

Don't forget to read https://live.gnome.org/GnomeShell/CheatSheet or the Help (System key + 'Help').

It's not specific to GNOME 3, but you can change the volume when your mouse is over the applet (don't click, just hover) with a mouse scroll.
With GTK+, did you know you can reach the end of a scrolled area with a left click on the arrow, and a specific position with a middle click?

I'm impressed by the new features of GNOME 3.2 and I'm waiting for Fedora 17 to enjoy it!

23 August 2011

GNU Hackers Meeting 2011 in Paris

In case you are in the Paris area and don't know already, there is a
GNU Hackers Meeting event being held from Thursday 25th to Sunday 28th
August, 2011 at IRILL. If you are a GNU user, enthusiast, or
contributor of any kind, feel free to come. I guess you can still
drop an email to ghm-registration@gnu.org.

For folks around on Wednesday (yeah, that's tomorrow), we are having a
dinner around 8 PM at the Mussuwam, a Senegalese restaurant in Paris, near Place
d'Italie. When you get there, just give them the secret password
(which is 'GNU') and they'll show you where the rest of the crowd sits.
Be sure to keep that password secret though. No one else should be in
the know.

Happy hacking and I hope to see you guys there.

04 July 2011

Going to RMLL (LSM) and Debconf!

Next week, I’ll head to Strasbourg for Rencontres Mondiales du Logiciel Libre 2011. On monday morning, I’ll be giving my Debian Packaging Tutorial for the second time. Let’s hope it goes well and I can recruit some future DDs!

Then, at the end of July, I’ll attend Debconf again. Unfortunately, I won’t be able to participate in Debcamp this year, but I look forward to a full week of talks and exciting discussions. There, I’ll be chairing two sessions about Ruby in Debian and Quality Assurance.

17 February 2011

Recent Libgda evolutions

It’s been a long time since I blogged about Libgda (and for the matter since I blogged at all!). Here is a quick outline on what has been going on regarding Libgda for the past few months:

  • Libgda’s latest version is now 4.2.4
  • many bugs have been corrected and it’s now very stable
  • the documentation is now fairly exhaustive and includes a lot of examples
  • a GTK3 branch is maintained, it contains all the modifications to make Libgda work in the GTK3 environment
  • the GdaBrowser and GdaSql tools have had a lot of work and are now both mature and stable
  • using the NSIS tool, I’ve made available a new Windows installer for the GdaBrowser and associated tools, available at http://www.gnome.org/~vivien/GdaBrowserSetup.exe. It’s only available in English and French, please test it and report any error.

In the next months, I'll work on polishing the GdaBrowser tool, which I use on a daily basis, even more (and of course correct bugs).

16 March 2010

Webkit fun, maths and an ebook reader

I have been toying with webkit lately, and even managed to do some pretty things with it. As a consequence, I haven’t worked that much on ekiga, but perhaps some of my experiments will turn into something interesting there. I have an experimental branch with a less than fifty lines patch… I’m still trying to find a way to do more with less code : I want to do as little GObject-inheritance as possible!

That little programming was done while studying class field theory, which is pretty nice on the high-level principles and somewhat awful on the more technical aspects. I also read again some old articles on modular forms, but I can’t say that was “studying” : since it was one of the main objects of my Ph.D, that came back pretty smoothly…

I found a few minutes to enter a brick-and-mortar shop and have a look at the ebook readers on display. There was only *one* of them: the Sony PRS-600. I was pretty unimpressed: the display was too dark (because it was a touch screen?), but that wasn’t the worst deal breaker. I inserted an SD card where I had put a sample of the type of documents I read: they showed up as a flat list (pain #1), and not all of them (no djvu) (pain #2), and finally, one of them showed up too small… and ended up fully unreadable when I tried to zoom (pain #3). I guess that settles the question I had on whether my next techno-tool would be a netbook or an ebook reader… That probably means I’ll look more seriously into fixing the last bug I reported on evince (internal bookmarks in documents).

24 February 2010

Renouveau dans ma vie professionnelle

Bonjour à tous,

je vous délaisse depuis quelques temps. Est-ce le temps qui fait cela, une période dans ma vie ou simplement autre chose, je n'en ai pas la moindre idée.

Je tenais juste à vous annoncer que je vais quitter mon employeur actuel qui est un Agence Gouvernementale pour chercher de l'expérience dans le secteur privé. En effet, je suis de plus en plus déçu par l'Administration.

Depuis quelques années, comme vous le savez, je me passionne pour la sécurité de l'Information. Ceci ajouté à une formation en Management de la Sécurité de l'Information, j'ai l'ambition de faire valoir mes expériences auprès d'un employeur (à définir) qui pourrait me permettre de les améliorer tout en lui faisant bénéficier de mes compétences.

Si vous avez de bonnes adresses, je suis preneur évidemment. ^^

16 January 2010

New Libgda releases

With the beginning of the year comes new releases of Libgda:

  • version 4.0.6 which contains corrections for the stable branch
  • version 4.1.4, a beta version for the upcoming 4.2 version

The 4.1.4 API is now considered stable and, except for minor corrections, should not be modified anymore.

This new version also includes a new database adapter (provider) to connect to databases through a web server (which of course needs to be configured for that purpose), as illustrated by the following diagram:

WebProvider usage

The database being accessed by the web server can be any type supported by the PEAR::MDB2 module.

The GdaBrowser application now supports defining presentation preferences for each table’s column, which are used when data from a table’s column need to be displayed:
GdaBrowser table column's preferences
The UI extension now supports improved custom layout, described through a simple XML syntax, as shown in the following screenshot of the gdaui-demo-4.0 program:

Form custom layout

For more information, please visit the http://www.gnome-db.org web site.

08 January 2010

Attending XMPP Summit and FOSDEM, 5th-8th of February in Brussels

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting. For the third year in a row, I’ll be flying to Brussels, Belgium next month to attend the XMPP Summit/FOSDEM combo. I didn’t look through the FOSDEM schedule yet, but when it comes to XMPP, I’m looking forward to some discussions on Jingle Nodes and Publish-Subscribe. I’ve been working more and more with XMPP in the past months, especially hacking on ejabberd, and attending is a good motivation to get some of my Jingle Nodes related code shaped up on time. See you there!

30 December 2009

Rappel - Définition du Hacker

Le hacker est un passionné d'informatique, souvent très doué, dont les seuls objectifs sont de "bricoler" programmes et matériels (software et hardware) afin d'obtenir des résultats de qualité pour lui-même, pour l'évolution des technologies et pour la reconnaissance de ses pairs.

Les conventions de hackers sont des rassemblements où ces férus d'informatique se rencontrent, discutent et comparent leurs travaux.

Depuis de nombreuses années, la tendance est de confondre à tort le hacker avec le cracker, dont les buts ne sont pas toujours légaux.

Or, on ne le répétera jamais assez, les objectifs du hacker sont louables et contribuent de manière active aux progrès informatiques et aux outils que nous utilisons quotidiennement.

05 November 2009

Attracted to FLT

I have been a little stuck for some weeks: a new year started (no, that post hasn’t been stuck since January — school years start in September) and I have students to tend to. As I like to say: good students bring work because you have to push them high, and bad students bring work because you have to push them up from low! Either way, it has been keeping me pretty busy.

Still, I found the time to read some more maths, but got lost on something quite unrelated to my main objective : I just read about number theory and the ideas behind the proof of Fermat’s Last Theorem (Taylor and Wiles’ theorem now). That was supposed to be my second target! Oh, well, I’ll just try to hit my first target now (Deligne’s proof of the Weil conjectures). And then go back to FLT for a new and deeper reading.

I only played a little with ekiga’s code — mostly removing dead code. Not much : low motivation.

15 October 2009

gwt-strophe 0.1.0 released

I just released the first version of gwt-strophe, GWT bindings for the Strophe XMPP library. Not much to say other than that it is pretty young, with all that can imply. The project is hosted at https://launchpad.net/gwt-strophe

11 July 2009

Slides from RMLL (and much more)

So, I’m back from the Rencontres Mondiales du Logiciel Libre, which took place in Nantes this year. It was great to see all those people from the french Free Software community again, and I look forward to seeing them again next year in Bordeaux (too bad the Toulouse bid wasn’t chosen).

The Debian booth, mainly organized by Xavier Oswald and Aurélien Couderc, with help from Raphaël, Roland and others (but not me!), got a lot of visits, and Debian’s popularity is high in the community (probably because RMLL is mostly for über-geeks, and Debian’s market share is still very high in this sub-community).

I spent quite a lot of time with the Ubuntu-FR crew, which I hadn’t met before. They do an awesome work on getting new people to use Linux (providing great docs and support), and do very well (much better than in the past) at giving a good global picture of the Free Software world (Linux != Ubuntu, other projects do exist and play a very large role in Ubuntu’s success, etc). It’s great to see Free Software’s promotion in France being in such good hands. (Full disclosure: I got a free mug (recycled plastic) with my Ubuntu-FR T-shirt, which might affect my judgement).

I gave two talks, on two topics I wanted to talk about for some time. First one was about the interactions between users, distributions and upstream projects, with a focus on Ubuntu’s development model and relationships with Debian and upstream projects. Second one was about voting methods, and Condorcet in particular. If you attended one of those talks, feedback (good or bad) is welcomed (either in comments or by mail). Slides are also available (in french):

On a more general note, I still don’t understand why the “Mondiales” in RMLL’s title isn’t being dropped or replaced by “Francophones“. Seeing the organization congratulate themselves because 30% of the talks were in english was quite funny, since in most cases, the english part of the talk was “Is there someone not understanding french? no? OK, let’s go on in french.“, and all the announcements were made in french only. Seriously, RMLL is a great (probably the best) french-speaking community event. But it’s not FOSDEM: different goals, different people. Instead of trying (and failing) to make it an international event, it would be much better to focus on making it a better french-speaking event, for example by getting more french-speaking developers to come and talk (you see at least 5 times more french-speaking developers in FOSDEM than in RMLL).

I’m now back in Lyon for two days, before leaving for the Montreal Linux Symposium, then coming back to Lyon for three days, then DebConf from the 23rd to the 31st, and then moving to Nancy, where I will start as an assistant professor in September (a permanent (tenured) position).

26 February 2009

fatal: protocol error: expected sha/ref

Dear Lennart,

You should probably know that typing the correct URL would work better for cloning a bzr branch (yes, a branch, not a repository).

This is what I get when I try to feed git a random invalid URL:

$ git clone git://github.com/idontexist
Initialized empty Git repository in /home/asabil/Desktop/idontexist/.git/
fatal: protocol error: expected sha/ref, got '

No matching repositories found.


Now is probably the time to stop this non-constructive “my DVCS is better than yours”, and focus on writing code and fixing bugs.

19 November 2008

19 Nov 2008

WOW... Four fucking years without blogging on my Advogato page. I needed time to put my head and my body in the right place. Four years of doubt, sadness and happiness as well. So, a few days ago, I decided to start blogging again.

That's all for the moment :)

22 July 2008

Looking for a job

In September I finish my computer science studies, so I'm starting to look for a job. I have really enjoyed my current job at Collabora maintaining Empathy; I learned lots of things about the Free Software world, and I would like to keep working on free-software-related projects if possible. My CV is available online here.

Do you guys know of any company around free software and GNOME looking for new employees? You can contact me by email at xclaesse@gmail.com.

22 April 2008

Enterprise Social Search slideshow

Enterprise Social Search is a way to search, manage, and share information within a company. Who can help you find relevant information and nothing but relevant information? Your colleagues, of course.

Today we are launching at Whatever (the company I work for) a marketing campaign for our upcoming product: Knowledge Plaza. Exciting times ahead!

28 January 2008

Ubuntu stable updates

There were some blog entries this week about GNOME stable updates in Ubuntu. There is no reason new bug-fix versions could not be uploaded to stable, other than the fact that the SRU rules require carefully checking all the changes, and doing this job on all the GNOME tarballs is quite some work; the Ubuntu desktop team is quite small and already overworked.

There is a list of packages which have relaxed rules, though; we have discussed adding GNOME to it, since the stable series usually has fixes worth having and not too many unstable changes (though the stable SVN code usually doesn’t get a lot of testing), and decided that the stable updates which look reasonable should be uploaded to hardy-updates.

There were also some concerns about gnome-games; 2.20.3 has been uploaded to gutsy-proposed today, which should reduce the number of bugs sent to the GNOME Bugzilla. The new dependencies on GGZ have also been reviewed, and 2.21 should be built soon in hardy.

14 November 2007

GNOME and Ubuntu

The FOSSCamp and UDS week was nice and a good occasion to talk to upstream developers and people from other distributions. We had desktop discussions about the new technologies landing in GNOME this cycle (the next Ubuntu will be an LTS, so we need a balance between new features and stability), the desktop changes we want to make, and how Ubuntu contributes to GNOME.

Some random notes about the Ubuntu upstream contributions:

  • Vincent asked again for an easy way to browse the Ubuntu patches and Scott picked up the task, the result is available there
  • The new Canonical Desktop Team will focus on making the user experience better, most of the changes will likely be upstream material and discussed there, etc
  • Canonical has open Ubuntu Desktop Infrastructure Developer and Ubuntu Conceptual Interface Designer positions; if you want to do desktop work for a cool open source company, you might be interested in those ;-)

GNOME updates in gutsy and hardy

  • Selected GNOME 2.20.1 changes have been uploaded to gutsy-updates
  • The GNOME 2.21.2 packaging has started in hardy; some updates and a lot of Debian merges are still on the TODO, though
  • We have decided to use tags in patches to indicate the corresponding Ubuntu and upstream bugs so it’s easier to get the context of the change, technical details still need to be discussed though

Update: Scott pointed out that you can use http://patches.ubuntu.com/n/nautilus/extracted to access the current nautilus version

03 November 2007

git commit / darcs record

I’ve been working with git lately, but I have also missed the darcs user interface. I honestly think the darcs user interface is the best I’ve ever seen; it’s such a joy to record/push/pull (when darcs doesn’t eat your CPU) :)

I looked at git add --interactive because it had hunk-based commits, a prerequisite for darcs record-style commits, but it has a terrible user interface, so I just copied the concept: running a git diff, filtering hunks, and then feeding the filtered diff through git apply --cached.

It supports binary diffs, file additions and removals. It also asks for new files to be added, even if that is not exactly how darcs behaves, but I always forget to add new files, so I added it. It will probably break on some extreme corner cases I haven’t encountered yet, but I gladly accept any patches :)
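The core of that approach, splitting the output of git diff into individually selectable hunks, can be sketched in a few lines of Python (a minimal illustration only; the `split_hunks` function and the sample diff are my own, not taken from the actual script):

```python
import re

def split_hunks(diff_text):
    """Split a unified diff into (file_header, [hunks]) pairs.

    Each hunk begins at an '@@' line; the file header is everything
    from the 'diff --git' line up to the first hunk. Re-emitting a
    header followed by any subset of its hunks yields a patch that
    'git apply --cached' will accept, which is what makes a
    darcs-record-style selective commit possible.
    """
    files = []
    for file_part in re.split(r"(?m)^(?=diff --git )", diff_text):
        if not file_part.strip():
            continue
        pieces = re.split(r"(?m)^(?=@@ )", file_part)
        # pieces[0] is the file header, the rest are hunks
        files.append((pieces[0], pieces[1:]))
    return files

# A tiny two-hunk diff to demonstrate the split.
sample = (
    "diff --git a/f b/f\n"
    "--- a/f\n"
    "+++ b/f\n"
    "@@ -1 +1 @@\n-a\n+b\n"
    "@@ -5 +5 @@\n-c\n+d\n"
)
parsed = split_hunks(sample)
```

Concatenating a file header with only the hunks the user answered “y” to, and piping that through git apply --cached, stages just those changes; a plain git commit then records them.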

Here’s a sample session of git-darcs-record script:

$ git-darcs-record
Add file:  newfile.txt
Shall I add this file? (1/1) [Ynda] : y

Binary file changed: document.pdf

Shall I record this change? (1/7) [Ynda] : y

@@ -1,3 +1,5 @@

Shall I record this change? (2/7) [Ynda] : y

@@ -1,17 +1,5 @@
 #!/usr/bin/env python

-# git-darcs-record, emulate "darcs record" interface on top of a git repository
-# Usage:
-# git-darcs-record first asks for any new file (previously
-#    untracked) to be added to the index.
-# git-darcs-record then asks for each hunk to be recorded in
-#    the next commit. File deletion and binary blobs are supported
-# git-darcs-record finally asks for a small commit message and
-#    executes the 'git commit' command with the newly created
-#    changeset in the index
 # Copyright (C) 2007 Raphaël Slinckx
 # This program is free software; you can redistribute it and/or

Shall I record this change? (3/7) [Ynda] : y

@@ -28,6 +16,19 @@
 # along with this program; if not, write to the Free Software
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.

+# git-darcs-record, emulate "darcs record" interface on top of a git repository
+# Usage:
+# git-darcs-record first asks for any new file (previously
+#    untracked) to be added to the index.
+# git-darcs-record then asks for each hunk to be recorded in
+#    the next commit. File deletion and binary blobs are supported
+# git-darcs-record finally asks for a small commit message and
+#    executes the 'git commit' command with the newly created
+#    changeset in the index
 import re, pprint, sys, os

 BINARY = re.compile("GIT binary patch")

Shall I record this change? (4/7) [Ynda] : n

@@ -151,16 +152,6 @@ def read_answer(question, allowed_responses=["Y", "n", "d", "a"]):
        return resp

-def setup_git_dir():
-       global GIT_DIR
-       GIT_DIR = os.getcwd()
-       while not os.path.exists(os.path.join(GIT_DIR, ".git")):
-               GIT_DIR = os.path.dirname(GIT_DIR)
-               if GIT_DIR == "/":
-                       return False
-       os.chdir(GIT_DIR)
-       return True
 def git_get_untracked_files():

Shall I record this change? (5/7) [Ynda] : y

# On branch master
# Changes to be committed:
#   (use "git reset HEAD file..." to unstage)
#       modified:   document.pdf
#       modified:   foobar.txt
#       modified:   git-darcs-record
#       new file:   newfile.txt
# Changed but not updated:
#   (use "git add file file..." to update what will be committed)
#       modified:   git-darcs-record
What is the patch name? Some cute patch name
Created commit a08f34e: Some cute patch name
 4 files changed, 3 insertions(+), 29 deletions(-)
 create mode 100644 newfile.txt

Get the script here: git-darcs-record script, and put it somewhere in your $PATH. Any comments or improvements are welcome!

22 January 2007

A new laptop, without Windows!

There it is: I had been thinking about it for a long time and it’s now done, I bought myself a brand-new laptop.

I bought it from the French site LDLC.com, and asked them whether it was possible to buy the computers in their catalogue without software (mainly without Windows). So I sent them an email, and to my great surprise they replied that it was entirely possible: you just place the order and then send an email asking for the software to be removed from it. So I ordered my laptop and they refunded me €20 for the software. It’s not much compared to the price of a laptop, but symbolically it’s already something.

Still, I have questions. Why isn’t this offer mentioned on LDLC’s site? Looking at the underside of my brand-new laptop, I notice something odd: the remains of a sticker that was peeled off, exactly where the Windows XP activation key is usually stuck. The refund of a round €20 by LDLC also seems strange to me, given that LDLC is only a reseller, not a manufacturer, and so they buy the computers with Windows already installed. All of this leads me to believe that it’s LDLC losing the €20, and I wonder to what end?!? To please free-software customers? To avoid lawsuits over bundled sales? To get the licences the customers didn’t want refunded in turn by the manufacturer/Microsoft, and possibly earn more than €20 if OEM licences are worth more than that? This will no doubt always remain a mystery.

So I installed Ubuntu, which runs rather well. I was even very impressed by network-manager, which connects me automatically to wireless or wired networks depending on availability, and even sets up a zeroconf network if it can’t find a DHCP server. That’s very handy for transferring data between two computers: just plug an Ethernet cable between the two (it also works over Wi-Fi, but I haven’t tested that yet) and the whole network is configured automatically without touching anything, truly magical! Windows can go hide; Ubuntu is far easier to use!

20 December 2006

Documenting bugs

I hate having to write about bugs in the documentation. It feels like waving a big flag that says ‘Ok, we suck a bit’.

Today, it’s the way fonts are installed, or rather, they aren’t. The Fonts folder doesn’t show the new font, and the applications that are already running don’t see them.

So I’ve fixed the bug that was filed against the documentation. Now it’s up to someone else to fix the bugs in Gnome.

05 December 2006

Choice and flexibility: bad for docs

Eye of Gnome comes with some nifty features like support for EXIF data in jpegs. But this depends on a library that isn’t a part of Gnome.

So what do I write in the user manual for EOG?

‘You can see EXIF data for an image, but you need to check the innards of your system first.’
‘You can maybe see EXIF data. I don’t know. Ask your distro.’
‘If you can’t see EXIF data, install the libexif library. I’m sorry, I can’t tell you how to do that, as I don’t know what sort of system you’re running Gnome on.’

The way GNU/Linux systems are put together is perhaps great for people who want unlimited ability to customize and choose. But it makes it very hard to write good documentation. In this sort of scenario, I would say it makes it impossible, and we’re left with a user manual that looks bad.

I’ve added this to the list of use cases for Project Mallard, but I don’t think it’ll be an easy one to solve.


Planète GNOME-FR

Planète GNOME-FR is an overview of the life, the work and, more generally, the world of the members of the GNOME-FR community.

Some posts are written in English because we collaborate with people from all over the world.

Last updated:
10 October 2015 at 12:00 UTC
All times are UTC.


Planète GNOME-FR is powered by the Planet aggregator, cron, Python, and Red Hat (which hosts this server).

The site design is based on that of the GNOME and Planet GNOME sites.

Planète GNOME-FR is maintained by Frédéric Péters and Luis Menina. If you would like to add your blog to this planet, just open a bug. Feel free to contact us by email for any other questions.