30 July 2015

Update to the GNOME Human Interface Guidelines

GNOME HIG logo

Allan Day, from the GNOME design team, is working on the first major update of the GNOME 3 Human Interface Guidelines. It will include a reorganization of the documentation, as well as a larger number of design patterns.

Among the other notable changes, navigation has been improved. The introduction has been shortened, making it quicker to reach the interesting parts. The front page now offers a much better overview, and the pages covering patterns and other interface components have also been greatly improved.

Another interesting addition is the linking between the guidelines and the GTK+ API reference documentation for the various widgets. This makes it easier to see what each widget does, while allowing the guidelines to track the toolkit more closely.

The work is still in progress; the final version will be published at the same time as GNOME 3.18.

29 July 2015

While searching, Files will switch to a list view

Search with list view in Files 3.16

Since the list view offers much more information (size, location…), the next version of the file manager will automatically switch to it when the user performs a search. Once the search is over, Files will switch back to the icon view if that was the user's original choice.

22 July 2015

Arduino support and various fixes in Ubuntu Make 0.9

A warm summer has started in some parts of the world, and with it the holidays: beaches and enjoying various refreshments!

However, the unstoppable Ubuntu Make team didn't take a break, and we continued making improvements thanks to the vibrant community around the project!

What's new in this release? First, Arduino support has been added, with the vast majority of the work done by Tin Tvrtković. Thanks to him for this excellent work! To install the Arduino IDE, just run:

$ umake ide arduino

Note that your user will be added to the right Unix group if it wasn't already in it. In that case, umake will ask you to log out and back in so that you can communicate with your Arduino device. Then you can enjoy the Arduino IDE:
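The group handling is automatic, but it can be checked by hand; a small sketch (the group is typically "dialout" on Debian-based systems, but that name is an assumption and varies across distributions):

```shell
# List the groups the current user belongs to; look for the serial-device
# group ("dialout" on Debian/Ubuntu -- an assumption, check your distro):
id -nG

# If the group is missing, add yourself to it and log back in
# (commented out here, since it needs root):
# sudo usermod -aG dialout "$USER"
```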

Arduino IDE

Some other highlights from this release include the deprecation of the Dart Editor framework and its replacement by a Dart SDK one. As of Dart 1.11, the Dart Editor isn't supported or bundled anymore (it still exists as an independent Eclipse plugin, though). We thus marked the Dart Editor framework as removal-only and added a Dart SDK framework (which adds the SDK to the user's PATH) instead. This is the new default for the Dart category.

Thanks to our extensive tests, we noticed that the 32-bit Visual Studio Code download page had changed, and the package thus wasn't installable anymore. This is fixed as of this release.

A lot of other improvements (in particular in the test-writing infrastructure) and other minor fixes are also part of 0.9. A more detailed changelog is available here.

0.9 is already available in Wily, and through its PPA for the Ubuntu 14.04 LTS and 15.04 releases! Get it while it's hot!

22 June 2015

Juniper vSRX (Firefly Perimeter) on Proxmox VE

Juniper provides a version of JUNOS, based on the one used by the SRX series, that can be run in a virtual machine. That product is great for Juniper users who want to play with their favorite network OS, and also for people who would like to discover the JUNOS world.

Juniper provides images for VMware and KVM-based hypervisors. As a Proxmox VE user, you know that it uses KVM to get things done. So having Firefly Perimeter working on Proxmox VE should be doable without much trouble. Here are the steps to get things working.

Downloading vSRX (Firefly Perimeter)

To set up vSRX on Proxmox VE we need to download the JVA file provided by Juniper. This file is an archive containing the KVM VM definition and the QCOW2 disk of the VM.

Preparing the VM

We then need to create a VM with the following characteristics (see also the end of this article):

  • OS: Other OS types (other)
  • CD/DVD: Do not use any media
  • Hard Disk: IDE 0, size of 2 GB, QCOW2 format
  • CPU: at least one socket and 2 cores, type KVM64 (default on latest versions of Proxmox VE)
  • Memory: 1024 MB is recommended
  • Network: as many interfaces as we want; just use Intel E1000 as the model for the interfaces
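As a sketch, the checklist above corresponds roughly to these entries in the VM definition file (the storage name and bridge are assumptions here, and Proxmox VE normally generates these lines itself, including a MAC address on the net0 entry, when the VM is created through the web interface):

```
# /etc/pve/nodes/NODENAME/qemu-server/VMID.conf (sketch)
ostype: other
ide0: local:VMID/vm-VMID-1.qcow2,size=2G
sockets: 1
cores: 2
memory: 1024
net0: e1000,bridge=vmbr0
```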

Using the vSRX Disk

Now that the VM definition has been created, we need to use the disk provided in the JVA file. For that we first need to extract it.

# bash junos-vsrx-12.1X47-D10.4-domestic.jva -x

The disk will be available in the directory that has been created. We just need to copy it over the one used by the VM (replace VMID with the ID of your VM).

# cp junos-vsrx-12.1X47-D10.4-domestic.img /var/lib/vz/images/VMID/vm-VMID-1.qcow2

With this, the VM is now bootable and JUNOS will load properly; we will not be able to interact with it, though. For that, we need a way to send the serial output to Proxmox VE's noVNC console.

Getting the serial output in the Proxmox VE console

First we need to find where our VM definition is stored. Usually it is under /etc/pve/nodes/NODENAME/qemu-server/VMID.conf (replace NODENAME and VMID with your own). Alternatively, we can use a command like the following:

# find / -name 'VMID.conf'

Then we can edit the VM definition file:

# vim /etc/pve/nodes/NODENAME/qemu-server/VMID.conf

And we have to add the following line in the configuration:

args: -serial tcp:localhost:6000,server,nowait

Finally, we need to change the VM display to Cirrus Logic GD 5446 (cirrus), either via the Proxmox VE web interface or by adding vga: cirrus to the VM definition.
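Putting the two changes together, the additions to the VM definition file look like this:

```
# Additions to /etc/pve/nodes/NODENAME/qemu-server/VMID.conf
args: -serial tcp:localhost:6000,server,nowait
vga: cirrus
```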

The End

We can now start the VM, and the output will be displayed in Proxmox VE's console. Enjoy using JUNOS in virtual machines.

vSRX running on Proxmox VE

Edit (2015-06-17):

After some tests I was glad to see that both the disk and the network interfaces can use the VIRTIO drivers. I would recommend using them, since they are supposed to improve scheduling at the hypervisor level.

02 June 2015

Tuning systemd services

Recently my Tor relay started crashing daily. I found out it was because usage had increased (approaching 10 MB/s), and every night, when logrotate asked it to reload, it failed with:

May 30 04:02:01.000 [notice] Received reload signal (hup). Reloading config and resetting internal state.
May 30 04:02:01.000 [warn] Could not open "/etc/tor/torrc": Too many open files
May 30 04:02:01.000 [warn] Unable to open configuration file "/etc/tor/torrc".
May 30 04:02:01.000 [err] Reading config failed--see warnings above. For usage, try -h.
May 30 04:02:01.000 [warn] Restart failed (config error?). Exiting.
May 30 04:02:01.000 [warn] Couldn't open "/var/lib/tor/state.tmp" (/var/lib/tor/state) for writing: Too many open files

The problem comes from LimitNOFILE=4096 in the service file, and I had no idea how to fix it cleanly.

fcrozat gave me the answer which I'll summarize as:

mkdir /etc/systemd/system/tor.service.d/
printf '[Service]\nLimitNOFILE=16384\n' > /etc/systemd/system/tor.service.d/limit.conf
systemctl daemon-reload
service tor restart
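The override can be sketched in a scratch directory first, to check that the file comes out right (the real path is /etc/systemd/system/tor.service.d/limit.conf); note that printf is used rather than plain echo, since echo does not interpret \n portably:

```shell
# Write the drop-in into a temporary directory and inspect it; the real
# location is /etc/systemd/system/tor.service.d/limit.conf.
dropin_dir="$(mktemp -d)/tor.service.d"
mkdir -p "$dropin_dir"
printf '[Service]\nLimitNOFILE=16384\n' > "$dropin_dir/limit.conf"
cat "$dropin_dir/limit.conf"
```

Once the real file is installed and the daemon reloaded, systemctl show tor -p LimitNOFILE should report the new value.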

25 May 2015

SUSE Ruling the Stack in Vancouver

Rule the Stack

Last week during the OpenStack Summit in Vancouver, Intel organized a Rule the Stack contest. That's the third one, after Atlanta a year ago and Paris six months ago. In case you missed the earlier episodes, SUSE won the two previous contests, with Dirk being pretty fast in Atlanta and Adam completing the HA challenge so we could keep the crown. So of course, we had to try again!

For this contest, the rules came with a list of penalties and bonuses, which made it easier for people to participate. And indeed, there were quite a number of participants, with the schedule for booking slots being nearly full. While deploying Kilo was a goal, you could go with older releases at a 10-minute penalty per release (so +10 minutes for Juno, +20 minutes for Icehouse, and so on). Similarly, the organizers wanted to see some upgrades, and encouraged that with a bonus that could significantly impact the results (-40 minutes); nobody tried that, though.

And guess what? SUSE kept the crown again. But we also went ahead with a new challenge: outperforming everyone else not just once, but twice, with two totally different methods.

For the super-fast approach, Dirk again built an appliance that has everything pre-installed and configures the software on boot. This is actually not too difficult, thanks to the amazing Kiwi tool, all the knowledge we have accumulated over the years at SUSE about building appliances, and the small scripts we use for the CI of our OpenStack packages. Still, it required some work to adapt the setup to the contest, and to make sure that our Kilo packages (which were brand new and without much testing) were fully working. The clock result was 9 minutes and 6 seconds, resulting in a negative time of minus 10 minutes and 54 seconds after the bonuses (yes, the text in the picture is wrong). Pretty impressive.

But we also wanted to show that our product would fare well, so Adam and I started looking at this. We knew it couldn't be faster than the approach Dirk picked, so from the start we targeted second position. There was not much to do for this approach, since it was similar to what he did in Paris, and our SUSE OpenStack Cloud Admin appliance had recently been updated. Our first attempt failed miserably due to a nasty bug (which was actually caused by a Unicode character in the ID of the USB stick we were using to install the OS... we fixed that bug later in the night). The second attempt went more smoothly and was actually much faster than we had anticipated: SUSE OpenStack Cloud deployed everything in 23 minutes and 17 seconds, which resulted in a final time of 10 minutes and 17 seconds after bonuses/penalties. And this was with a 10-minute penalty due to the use of Juno (as well as a couple of minutes lost debugging a setup issue that was just mispreparation on our side). A key contributor to this result is our use of Crowbar, which we've kept improving over time, and which really makes it easy and fast to deploy OpenStack.

Wall-clock time for SUSE OpenStack Cloud

Wall-clock time for SUSE OpenStack Cloud

These two results wouldn't have been possible without the help of Tom and Ralf, but also without the whole SUSE OpenStack Cloud team that works on a daily basis on our product to improve it and to adapt it to the needs of our customers. We really have an awesome team (and btw, we're hiring)!

For reference, three other contestants succeeded in deploying OpenStack, with the fastest of them ending at 58 minutes after bonuses/penalties. And as I mentioned earlier, there were even more contestants (including some who are not vendors of an OpenStack distribution), which is really good to see. I hope we'll see even more in Tokyo!

Results of the Rule the Stack contest

Results of the Rule the Stack contest

Also thanks to Intel for organizing this; I'm sure every contestant had fun and there was quite a good mood in the area reserved for the contest.

Update: See also the summary of the contest from the organizers.

22 May 2015

iio-sensor-proxy 1.0 is out!

Modern (and some less modern) laptops and tablets have a lot of built-in sensors: an accelerometer for screen orientation, an ambient light sensor to adjust the screen brightness, a compass for navigation, a proximity sensor to turn off the screen when the device is next to your ear, etc.

Enabling

We've supported accelerometers in GNOME/Linux for a number of years, following work on the WeTab. The accelerometer appeared as an input device, and sent kernel events when the orientation of the screen changed.

Recent devices, especially Windows 8 compatible devices, instead export a HID device, which, under Linux, is handled through the IIO subsystem. So the first version of iio-sensor-proxy took readings from the IIO sub-system and emulated the WeTab's accelerometer: a few too many levels of indirection.

The 1.0 version of the daemon implements a D-Bus interface, which means we can support more than accelerometers. The D-Bus API, this time, is modelled after the Android and iOS APIs.
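From memory, the interface has roughly the following shape; the exact names are quoted here from recollection and should be checked against the project's API documentation:

```
Bus name:    net.hadess.SensorProxy (on the system bus)
Object path: /net/hadess/SensorProxy
Methods:     ClaimAccelerometer() / ReleaseAccelerometer()
             ClaimLight() / ReleaseLight()
Properties:  HasAccelerometer, AccelerometerOrientation,
             HasAmbientLight, LightLevel, LightLevelUnit
```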

Enjoying

Accelerometers will work in GNOME 3.18 as well as they used to, once a few patches have been merged[1]. If you need support for older versions of GNOME, you can try using version 0.1 of the proxy.


Orientation lock in action


As we've added ambient light sensor support in the 1.0 release, it's time to put into practice the best practices mentioned in Owen's post about battery usage. We already had code like that in gnome-power-manager nearly 10 years ago, but it really didn't work very well.

The major problem at the time was that ambient light sensor readings weren't in any particular unit (values had different meanings for different vendors), and users felt that they were fighting against the computer for control of the backlight.

Richard fixed that, though, adapting work he did on the ColorHug ALS sensor; the brightness is now completely in the user's control, and adapts to the user's tastes. This means that we can implement the simplest of UIs for its configuration.

Power saving in action

This will be available in the upcoming GNOME 3.17.2 development release.

Looking ahead

For future versions, we'll want to export the raw accelerometer readings so that applications, including games, can make use of them; this might bring up security issues. SDL, Firefox and WebKit could all be adapted in the near future.

We're also looking at adding compass support (thanks Elad!), which Geoclue will then export to applications, so that location and heading data is collected through a single API.

Richard and Benjamin Tissoires, of input-device-fixing fame, are currently working on making the ColorHug ALS compatible with Windows 8, meaning it would work out of the box with iio-sensor-proxy.

Links

We're currently using GitHub for bug and code tracking. Releases are mirrored on freedesktop.org, as GitHub is known to mangle filenames. API documentation is available on developer.gnome.org.

[1]: gnome-settings-daemon, gnome-shell, and systemd will need patches

12 May 2015

Deploying Docker for OpenStack with Crowbar

A couple of months ago, I was meeting colleagues of mine working on Docker and we discussed how much effort it would be to add support for it to SUSE OpenStack Cloud. It had been requested for a long time by quite a number of people, and we never really had time to look into it. To find out how difficult it would be, I started looking at it in the evening; the README confirmed it shouldn't be too hard. But of course, we use Crowbar as our deployment framework, and the manual way of setting it up is not really something we'd want to recommend. Would it be "not too hard" or just "easy"? There was only one way to know... And guess what happened next?

It took a couple of hours (and two patches) to get this working, including the time for packaging the missing dependencies and for testing. That's one of the nice things we get from using Crowbar: adding new features like this is relatively straightforward, so we can enable people to deploy a full cloud with all of these nice small features, without requiring them to learn all the technologies and how to deploy them. Of course, this was just a first pass (using the Juno code, by the way).

Fast-forward a bit, and we decided to integrate this work. Since it was not a simple proof of concept anymore, we went ahead with some more serious testing. This resulted in us backporting patches for the Juno branch, but also making Nova behave a bit better, since it wasn't aware of Docker as a hypervisor. This last point is a major problem if people want to use Docker alongside KVM, Xen, VMware or Hyper-V; multi-hypervisor support is something that really matters to us, and this issue was actually the first one reported to us ;-) To validate all our work, we of course asked Tempest to help us, and the results are pretty good (we still have some failures, but they're related to missing features like volume support).

All in all, the integration went really smoothly :-)

Oh, I forgot to mention: there's also a Docker plugin for Heat. It's now available with our Heat packages in the Build Service as openstack-heat-plugin-heat_docker (Kilo, Juno); I haven't played with it yet, but this post should be a good start for anyone curious about the plugin.

30 April 2015

5 Humanitarian FOSS projects

Over on opensource.com, I just posted an article on 5 humanitarian FOSS projects to watch, another installment in the humanitarian FOSS series the site is running. The article covers the worthy projects Literacy Bridge, Sahana, HOT, HRDAG and FrontlineSMS.

A few months ago, we profiled open source projects working to make the world a better place. In this new installment, we present some more humanitarian open source projects to inspire you.

Read more

Ubuntu Make 0.7 released with Visual Studio Code support

If you follow the news, yesterday Microsoft announced Visual Studio Code on stage during their Build conference. One of the nice surprises was that this new IDE, focused on web and cloud development, is available on Mac OS X and, of course, on Linux! Some screenshots presented at the conference showed Visual Studio Code running on Ubuntu in a Unity session.

This sounded like a nice opportunity for Ubuntu Make to shine again, so we just added support for it! And yeah, it's a snappy feeling to deliver it this fast! This release of course also brings the required large and medium non-regression tests, to ensure that the installation keeps working as expected as time passes, and to detect any server-side or client-side regression.

To install it, just run:

$ umake web visual-studio-code

Here is the required screenshot of a fresh Visual Studio Code installation with Ubuntu Make!

Visual Studio Code

You can get Ubuntu Make 0.7 through its PPA, for the Ubuntu 14.04 LTS, 14.10 and 15.04 releases.

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! For the various forms of contribution and how to lend a hand, you can refer to this post!

28 April 2015

Spring DIY

Spring is here, and since the books were piling up a bit too high on the mantelpiece, it was the perfect occasion to get into some furniture building.

Mantelpiece absorbing a surplus of books

Mantelpiece absorbing a surplus of books, 8 April 2015

At first I had a few ambitions on the carpentry side (Federico's fault), but I quickly scaled them down.

Improvised workbench in the kitchen

Improvised workbench in the kitchen, 25 April 2015

And in the end, once everything was assembled in place, and apart from the speaker weighing a bit, it looks fairly straight (as long as you don't look too closely).

Bookcase assembled and already well filled

Bookcase assembled and already well filled, 26 April 2015

Next step: the other side of the mantelpiece, for the records, while already planning some space for the books to come. (Before next year, maybe.)

(PS: congratulations to Debian for Jessie, and to Luis)

Update: trusting the boards under the speaker, but not too much, I still added a cleat underneath to help support it…

Bookcase with the extra cleat

Bookcase with the extra cleat, 28 April 2015

15 April 2015

Be IP is hiring!

In case some readers of this blog are interested in working with open source software and VoIP technologies, Be IP (http://www.beip.be) is hiring a developer. Please see http://www.beip.be/BeIP-Job-Offer.pdf for the job description. You can contact me directly.

14 April 2015

Hackweek 12: improving GNOME password management, day 1

This week is Hackweek 12 at SUSE.

My Hackweek project is improving GNOME password management, by investigating password manager integration in GNOME.

Currently, I'm using LastPass, which is a cloud-based password management system.

It has a lot of very nice features, such as:
  • 2-factor authentication
  • Firefox and Chrome integration
  • Linux support
  • JS web client with no install required, for logging in from an unknown system (I've never needed it myself)
  • Android integration (including automatic password selection for applications)
  • open-source CLI client (lastpass-cli), which can extract account-specific information
  • encrypted data (nothing is stored unencrypted server-side)
  • strong-password generator
  • support for encrypted notes (not only passwords)
  • server-based (clients sync), with offline operation supported
However, it also has several drawbacks:
  • closed-source
  • subscription-based (a subscription is required for Android support)
  • can't be hosted on my own server
  • doesn't integrate at all with the GNOME desktop
I don't want to reinvent the wheel (unless it is really needed), which is why I spent my first day surveying the various password managers available on Linux, comparing their features (and testing them a bit).

So far, I found the following programs:
  • KeePass (GPL):
    • version 1.x written in C++, still supported but not actively developed
    • version 2.x written in C# (Windows-oriented), works with Mono on Linux
    • UI feels really Windows-like
    • DB format changed between v1 and v2
    • supports encrypted notes
    • password generator
    • supports plugins (a lot are available)
    • supports OTP (the keeotp plugin provides 2-factor auth through TOTP; HOTP is built-in)
    • shared DB editing
    • supports YubiKey (static or HOTP)
    • 2 Firefox extensions available (KeeFox, PassIFox)
    • 3 Android applications available (KeePass2Android supports an alternative keyboard; KeepShare supports an alternative keyboard plus the a11y framework to fill Android application forms, like LastPass)
    • Chrome extension available
    • JS application available
    • CLI available
    • big ecosystem of plugins and other applications able to process the file format

  • KeePassX (GPL)
    • Qt4 "port" of KeePass (feels more like a Linux application than KeePass does)
    • alpha version for DB v2 support
    • missing support for OTP
    • missing support for KeePassHTTP (required by the Firefox extensions to talk to the main application); support is being done in a separate branch by a contributor, not merged
    • releases are very scarce (the latest is from April 2014; despite commits in git, very few people are contributing, according to git)
    • libsecret D-Bus support is being started by a contributor

  • Mitro:
    • developed by a company that was bought by Twitter last year; the project was released under the GPL, with no development since January.

  • Password Safe (Artistic license):
    • initially written by Bruce Schneier
    • beta version available on Linux
    • written in C++ with wxWidgets 3.0
    • YubiKey supported
    • Android application available; it uses neither a keyboard nor the a11y framework, only copy/paste (but allows syncing the DB with ownCloud and other cloud platforms)
    • CLI available
    • 3 different DB formats (pre-2.0, 2.0, 3.0)
    • password history
    • no Firefox extension, and the built-in "auto-type" function is anything but intuitive
    • supports merging of various DBs

  • Encrypt:
    • same zero-knowledge framework as SpiderOak
    • Node.js-based

  • Pass:
    • simple script on top of text files / GnuPG, and optionally git (used for history; can also be used to host the files)
    • learning curve isn't easy (mostly CLI); GnuPG needs to be set up before use
    • one file per password entry
    • very basic Qt GUI available
    • basic Firefox extension available
    • basic Android application available
Unfortunately, none of these applications integrates properly (yet) with GNOME (master password prompt, locking the keyring when the desktop is locked, etc.).

I've also looked at gnome-keyring integration with the various browsers:
  • Several extensions already exist; one is fully written in JavaScript and works nicely (a port to libsecret is being investigated)
  • Chrome already has gnome-keyring and libsecret integration
  • Epiphany already works nicely with gnome-keyring
  • No password generator is available in Firefox / Chrome / Epiphany (nor in GTK+ on a more generic basis)
Unfortunately, each browser stores metadata for password entries in gnome-keyring in a slightly different format (field names), causing duplicated password entries and preventing keyring data from being shared across browsers.
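As an aside on that last point, a strong-password generator is only a few lines of code; here is a minimal sketch in Python (the alphabet and default length are arbitrary choices of mine, not anything the toolkits mandate):

```python
import secrets
import string

def generate_password(length=16,
                      alphabet=string.ascii_letters
                               + string.digits
                               + string.punctuation):
    # Draw each character uniformly from the alphabet using the OS
    # cryptographic random source (the secrets module).
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```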

Conclusions for this first day of hackweek:
  • The KeePass file format seems to be the format of choice for password managers (a lot of applications are written around it)
  • The password manager that would fit my requirements is KeePass, but it is written in C#/Mono (I don't want the Mono stack to come back on my desktops) and too Windows-oriented, so it's not an option.
  • KeePassX seems to be the way to go (even if it is Qt-based), but it lacks a lot of features, and I'm not sure it's worth spending effort adding those missing features there.
  • Pass is extremely simple (which would make hacking around it pretty straightforward) but requires a lot of work around it (Android, GUI) to make it integrate nicely with GNOME.
I haven't made up my mind yet about which solution is best, but I'll probably spend the following days hacking around KeePassX (or a new program to wrap the KeePass DB into libsecret) and Firefox gnome-keyring integration.

Comments welcome, of course.

10 April 2015

Coffee: my personal history

As always, long time no blog. These days, I don't have enough energy (nor content, in my opinion) to write blog posts, mostly about Free Software, that would be relevant to other people.

Why, you might ask? Mostly because with my not-so-new-anymore position at SUSE (Enterprise Desktop Release Manager), I'm mostly working behind the scenes (discovering the joys of OBS to create ISO images and lots of similarly crazy stuff), which might not be that sexy to describe but still needs to be done ;)

So, instead of closing this blog to new posts, I'm trying something new to me: writing about things that aren't Free Software but might still interest people.

My new thing these days (ask my wife ;) is coffee.



I've always been fond of coffee (and tea; they aren't mutually exclusive, fortunately), probably because when I was a child, my parents loved good coffee and I was happy to be the one taking care of both the electric grinder and the espresso machine we had. And I remember how difficult it was to find good coffee, even more so when you were living in a very rural area of France and the only online services were accessible with a Minitel, and were definitely not selling coffee ;)

Fast forward ten years: when I started working in Paris, I was still into coffee, and I discovered something that wasn't well known at all at the time (it was 2002, and George was still working in ER ;): Nespresso. This was a great thing (even if I was a bit worried by the closed system around it), because I could get an espresso at home that was always good (in my opinion at the time), and could switch between various coffees without any hassle (try that with several opened bags of ground coffee when you are single and only drink one espresso per day ;)

And so started my love story with Nespresso, which has not ended (yet), with its ups (being part of a customer panel once, with UI designers, which was very interesting) and downs. I often skipped coffee in cafés and restaurants because I knew it wouldn't be good!

Nespresso Drinker
Fast forward again 10 years: we are in 2014. The caps war has been on for a few years in France, since some of the Nespresso patents are now in the public domain and competitors are trying to get a share of this huge market (France is apparently one of the biggest markets for Nespresso). I've tried various alternative caps, and most of them are just cheaper and not as good as the originals, except for one or two made by some "small" roasters (Terre de Café, for instance). I ended up sticking with the original, until something better happened.

And it has happened these days, somewhat unexpectedly: for a few years, I had been reading about strange devices (the AeroPress being cited often) and tasty filter coffee (which, for me, had always been synonymous with bad coffee), and I also heard some radio shows about coffee which made me think: let's try.
I ordered an AeroPress and tried it (with some fair trade coffee from my supermarket, since I didn't have any ground coffee at home and opening caps wasn't really a good idea). Result: not bad compared to the consistency of Nespresso, but not that great. I knew I wasn't using great coffee.

The Aeropress
So, I decided to expand a bit more and searched for good coffee roasters in Paris. One of those most often mentioned is Coutume Café (their main website is not great at the moment; better to look at their Facebook account), who also have a coffee shop. I went there, tried one of their coffees, and I was astonished. It was the best coffee I have ever tasted, with flavors like red fruits and chocolate. This was incredible, and it wasn't even an espresso (which had been my reference for coffee) but filter coffee, which looks like dishwater ;)



So I now have this exact same coffee at home, and I'm waiting for the delivery of a freshly ordered manual grinder to try to reproduce this coffee experience, before I try other coffees and other Paris roasters.

Let's see if I succeed :)

08 April 2015

GNU Cauldron 2015

This year the GNU Cauldron Conference is going to be held in Prague, Czech Republic, from August 7 to 9, 2015.

The GNU Cauldron Conference is a gathering of users and hackers of the GNU toolchain ecosystem.

Meaning that if you are interested in projects even remotely related to the GNU C Library, the GNU Compiler Collection, the GNU Debugger, or any toolchain or runtime project that has ties with the GNU system, you are welcome!

If you are part of a Free Software project that uses the GNU toolchain and would like your voice to be heard, or want to hang out with other users and hackers of that space, you are even more welcome! If you have crazy ideas you'd like to discuss over a nice beverage of your choice, please join!

You just have to send a nice note to tools-cauldron-admin@googlegroups.com saying that you are coming, and that acts as a registration. The number of seats is limited, so please do not drag your feet too much :-)

And if you want to present a talk, there is a call for papers under way. You just have to send your abstract to tools-cauldron-admin@googlegroups.com. The exact call for papers can be read here.

So see you there, gals and guys!

02 April 2015

GNOME 3.16

GNOME 3.16 was released last week; it is the result of more than 30,000 commits by over 1,000 people. I am always impressed by those numbers. Thank you all!

GNOME 3.16

It comes with many improvements, including a revamped notification system. Allan detailed the features on his blog; go read it for the whole story: In case you didn’t notice….

On the Brussels front, Guillaume has been so busy he forgot to organize a release party; he has come back to his senses and it will happen, but still, we're losing ground here. So I went partying elsewhere, in this case in Lyon, for their annual free software event ("les JDLL"). We had a booth thanks to Bastien, and Mathieu was also there to help demonstrate and discuss GNOME.

GNOME booth at JDLL 2015

See you in six months!

JdLL 2015

Presentation and conferencing

Last weekend, in the Salle des Rancy in Lyon, GNOME folks (Fred Peters, Mathieu Bridon and myself) set up our booth at the top of the stairs, as the space graciously offered by Ubuntu-FR and Fedora was a tad small. The JdLL were starting.

We gave away a few GNOME 3.14 Live and install DVDs (more on that later), discussed much-loved features and hated bugs, and how to report them. A very pleasant experience all in all.



On Sunday afternoon, I gave a small presentation about GNOME's 15 years: talking about the upheaval, about dragging kernel drivers and OS components kicking and screaming to work as their APIs say they should, presenting GNOME 3.16's new features, and teasing about upcoming GNOME 3.18 ones.

During the Q&A, we had a few folks more than interested in support for tablets and convertible devices (such as the Microsoft Surface, and Asus T100). Hopefully, we'll be able to make the OS support good enough for people to be able to use any Linux distribution on those.

Sideshow with the Events box

Due to scheduling errors on my part, we ended up with the "v1" events box for our booth. I made a few changes to the box before we used it:

  • Removed the 17" screen and replaced it with a 21" widescreen one with built-in speakers. This is useful when we can't set up the projector because of a lack of walls.
  • Upgraded the machine to 1GB of RAM, thanks to my hoarding of old parts.
  • Bought a French keyboard and removed the German one (with missing keys), cleaned up the UK one (which still uses IR wireless).
  • Threw away GNOME 3.0 CDs (but kept the sleeves that don't mention the minor version). You'll need to take a sharpie to the small print on the back of the sleeve if you don't fill it with an OpenSUSE CD (we used Fedora 21 DVDs during this event).
  • Triaged the batteries. Office managers, get this cheap tester!
  • The machine's Wi-Fi was unstable, causing hardlocks (please test again if you use a newer version of the kernel/distributions). We tried to get onto the conference network through the wireless router, and installed DD-WRT on it as the vendor firmware didn't allow that.
  • The Nokia N810 and N800 tablets will be going to kernel developers who are working on Nokia's old Linux devices and upstreaming drivers.
The events box is still in Lyon, until I receive some replacement hardware.

The machine is 7 years old (nearly 8!) and only had 512MB of RAM; after the 1GB upgrade it was usable, and many people were impressed by the speed of GNOME on a legacy machine like that (probably more so than by a brand new one stuttering because of a driver bug, for example).

It makes you wonder what the use of "lightweight" desktop environments is, when a lot of their features are either punted to helpers that GNOME doesn't need or not implemented at all (an old CPU and no 3D driver is pretty much the only use case left for them).

I'll be putting a small SSD into the demo machine, to give it another speed boost. We'll also need a new padlock, after an emergency metal saw attack was necessary on Sunday morning. Five different folks tried to open the lock with the code read off my email, to no avail. Did we accidentally change the combination? We'll never know.

New project, ish

For demo machines, especially newly installed ones, you'll need some content to demo applications. This is my first attempt at uniting GNOME's demo content for release notes screenshots, with some additional content that's free to re-distribute. The repository will eventually move to gnome.org, obviously.

Thanks

The new keyboard and mouse, monitor, padlock, and SSD (and my time) were graciously sponsored by Red Hat.

01 February 2015

FOSDEM 2015

Second and last day of FOSDEM at the GNOME booth.

Yesterday was quite a full day, with as always many talks running in parallel, including a few of interest for my new job at Anevia, such as the ones on Postgres, SPDX, and open source license management in general.

Here at the GNOME booth we see, as every year, familiar faces: the local Belgian members such as fredp, cassidy and staz, GNOME Foundation members who came from the other side of the world to attend the talks or help at the booth, and familiar French faces such as nicalsilva from Mozilla, who had kindly opened the doors of Mozilla's Paris office for last year's translation sprint.

New faces too, including a member of Épitech who would like to organize GNOME presentations for students. We had already received a request of this kind (from Polytech Tours, if I remember correctly), but it never materialized. Épitech students use openSUSE with GNOME as part of their curriculum and are therefore already familiar with our environment. So there would potentially be an attentive audience and, who knows, new contributors? Let's hope this contact leads to something. We had already discussed within GNOME-FR the idea of having shared presentations precisely for this educational setting.

On my side, I would like to push forward the translation of the developer documentation, which is in very poor shape at the moment. Documentation in French matters, especially for students who do not yet necessarily have a good command of technical English but would like to learn. If I cannot manage it on my own, I think a new translation sprint dedicated to the developer documentation could be a solution.

This year, no T-shirts for sale: they stayed in Sweden for another conference. But we have a few posters, bags (sold out), and our silver stickers, the proceeds of which go to GNOME-FR. We have sold a little over 200 stickers this weekend.

(photo taken with my crummy webcam :-p)

Feel free to come and see us at our booth until tonight, or at the year's upcoming events, most likely Solution Linux on 19-20 May at Paris-La Défense!

29 January 2015

Samsung 840 EVO Performance fix

Several weeks ago Samsung released a fix for the 840 series of their SSDs, which had performance issues on data stored for a long time. While the fix is quite simple to apply on Windows, it can be quite tricky when you use your SSD on a GNU/Linux powered system. To fix your SSD you will need a bootable USB key with the Samsung binaries. Moreover, the Samsung documentation is not really well written and can lead to confusion. So here are the steps for dear GNU/Linux users to fix their SSDs.

Some preps

Firstly, prepare a USB key (at least 512 MB, just to be sure) and download FreeDOS.

Creating the bootable USB key

Once FreeDOS is on your computer, plug the USB key in and find the device node to interact with it. You can generally find the device using the dmesg command, which will output something like this:

[1017607.068095] usb 2-1: new high-speed USB device number 110 using ehci-pci
[1017607.278127] usb 2-1: New USB device found, idVendor=1b1c, idProduct=1ab1
[1017607.278135] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1017607.278140] usb 2-1: Product: Voyager
[1017607.278145] usb 2-1: Manufacturer: Corsair
[1017607.278150] usb 2-1: SerialNumber: AA00000000000634
[1017607.278936] usb-storage 2-1:1.0: USB Mass Storage device detected
[1017607.279084] scsi12 : usb-storage 2-1:1.0
[1017608.389828] scsi 12:0:0:0: Direct-Access Corsair Voyager 1100 PQ: 0 ANSI: 0 CCS
[1017608.390448] sd 12:0:0:0: Attached scsi generic sg2 type 0
[1017608.391272] sd 12:0:0:0: [sdb] 15663104 512-byte logical blocks: (8.01 GB/7.46 GiB)
[1017608.392259] sd 12:0:0:0: [sdb] Write Protect is off
[1017608.392266] sd 12:0:0:0: [sdb] Mode Sense: 43 00 00 00
[1017608.394784] sd 12:0:0:0: [sdb] No Caching mode page found
[1017608.394792] sd 12:0:0:0: [sdb] Assuming drive cache: write through
[1017608.402247]  sdb: sdb1
[1017608.405637] sd 12:0:0:0: [sdb] Attached SCSI removable disk

In this case you want to use the /dev/sdb drive, as seen in the log.
Now you can write the FreeDOS image to the USB disk. The image is compressed, so you'll need to decompress it first.

$ bunzip2 FreeDOS-1.1-memstick-2-256M.img.bz2
$ dd if=FreeDOS-1.1-memstick-2-256M.img of=/dev/sdb bs=512k
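Since dd will happily overwrite whatever device you point it at, a cheap sanity check before writing is to make sure the target device is not mounted anywhere. A minimal sketch (the `is_mounted` helper name is mine, and the fake mounts file is only for illustration; in real use you would check against /proc/mounts):

```shell
# is_mounted: succeed if the given device (or one of its partitions)
# appears in the given mounts table (normally /proc/mounts).
is_mounted() {
    grep -q "^$1" "$2"
}

# Example against a fake mounts table:
printf '/dev/sdb1 /mnt vfat rw 0 0\n' > /tmp/fake_mounts
if is_mounted /dev/sdb /tmp/fake_mounts; then
    echo "refusing to write: device looks mounted"
fi
```

In real use you would run `is_mounted /dev/sdb /proc/mounts` and abort (or umount) before launching dd.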

Copying the Samsung binaries

Download the Samsung binaries.

Mount the USB key and unzip the binaries at the root of the USB key, so that you will be able to use them from FreeDOS later.

# mount /dev/sdb1 /mnt
# unzip Samsung_Performance_Restoration_USB_Bootable.zip
# mv 840Perf/* /mnt
# umount /mnt
# eject /dev/sdb

The fix

Plug the USB key into your machine and reboot the host. Do whatever is necessary to boot from the USB key, then choose option 4 of FreeDOS, "Load FreeDOS without driver".

Once FreeDOS is running, just run the PERF.EXE file and the Samsung tool will start. Enter the index shown next to the SSD you want to upgrade and fix. The utility will take care of everything (firmware upgrade and fix). Note that the pass to fix the SSD can take some time.

Once the tool has finished fixing your SSD, just reboot the host by typing reboot in FreeDOS. Do not forget to unplug the USB key to avoid booting from it again.

Enjoy your brand new fixed SSD!

25 January 2015

Ekiga 5 – Progress Report

Current Status: Ekiga 5 has progressed a lot lately. OpenHub is reporting High Activity for the project. The main reason behind this is that I am again dedicating much of my spare time to the project. Unfortunately, we are again facing a lack of contributions. Most probably (among others) because the project has been […]

18 December 2014

Kernel hacking workshop

As part of our "community" program at Collabora, I had the chance to attend a workshop on kernel hacking at UrLab (the ULB hackerspace). I had never touched any part of the kernel and always saw it as a scary thing for hardcore hackers wearing huge beards, so this was a great opportunity to demystify the beast.

We learned how to create and build modules, interact with userspace using the /sys pseudo-filesystem, and perform some simple tasks with the kernel's internal library (memory management, linked lists, etc.). The second part was about the old-school /dev system and how to implement a character device.

I also discovered lxr.free-electrons.com/, which is a great tool for browsing and finding your way through a huge code base. It's definitely the kind of tool I'd consider re-using for other projects.

So, a very cool experience. I still don't see myself submitting kernel patches any time soon, but I may consider trying to implement a simple driver or something if I ever need to. Thanks a lot to UrLab for hosting the event, to Collabora for letting me attend, and of course to Hastake, who did a great job explaining all this and providing fun exercises (I only had to reboot 3 times! But yeah, next time I'll use a VM :) )

Club Mate, kernel hacking and bulgur

12 November 2014

GNOME Trademark and Groupon

If you regularly read the minutes of the GNOME Foundation, you will have noticed that for a few months there has been a dispute about the use of the GNOME trademark by Groupon. The company released a point-of-sale tablet under the name GNOME despite the trademark being registered by the GNOME Foundation. So, long story short, Groupon […]

The post GNOME Trademark and Groupon appeared first on Nothing Fancy.

17 September 2014

What’s in a job title?

Over on Google+, Aaron Seigo, in his inimitable way, launched a discussion about people who call themselves community managers. In his words: “the “community manager” role that is increasingly common in the free software world is a fraud and a farce”. As you would expect when casting aspersions on people whose job is to talk to people in public, the post generated a great, and mostly constructive, discussion in the comments – I encourage you to go over there and read some of the highlights, including comments from Richard Esplin, my colleague Jan Wildeboer, Mark Shuttleworth, Michael Hall, Lenz Grimmer and other community luminaries. Well worth the read.

My humble observation here is that the community manager title is useful, but does not affect the person’s relationships with other community members.

First: what about alternative titles? Community liaison, evangelist, gardener, concierge, “cat herder”, ombudsman, Chief Community Officer, community engagement… all have been used as job titles to describe what is essentially the same role. And while I like the metaphors used for some of the titles like the gardener, I don’t think we do ourselves a service by using them. By using some terrible made-up titles, we deprive ourselves of the opportunity to let people know what we can do.

Job titles serve a number of roles in the industry: communicating your authority on a subject to people who have not worked with you (for example, in a panel or a job interview), and letting people know what you did in your job in short-hand. Now, tell me, does a “community ombudsman” rank higher than a “chief cat-herder”? Should I trust the opinion of a “Chief Community Officer” more than a “community gardener”? I can’t tell.

For better or worse, “Community manager” is widely used, and more or less understood. A community manager is someone who tries to keep existing community members happy and engaged, and grows the community by recruiting new members. The second order consequences of that can be varied: we can make our community happy by having better products, so some community managers focus a lot on technology (roadmaps, bug tracking, QA, documentation). Or you can make them happier by better communicating technology which is there – so other community managers concentrate on communication, blogging, Twitter, putting a public face on the development process. You can grow your community by recruiting new users and developers through promotion and outreach, or through business development.

While the role of a community manager is pretty well understood, it is a broad enough title to cover evangelist, product manager, marketing director, developer, release engineer and more.

Second: The job title will not matter inside your community. People in your company will give respect and authority according to who your boss is, perhaps, but people in the community will very quickly pigeon-hole you – are you doing good work and removing roadblocks, or are you a corporate mouthpiece, there to explain why unpopular decisions over which you had no control are actually good for the community? Sometimes you need to be both, but whatever you are predominantly, your community will see through it and categorize you appropriately.

What matters to me is that I am working with and in a community, working toward a vision I believe in, and enabling that community to be a nice place to work in where great things happen. Once I’m checking all those boxes, I really don’t care what my job title is, and I don’t think fellow community members and colleagues do either. My vision of community managers is that they are people who make the lives of community members (regardless of employers) a little better every day, often in ways that are invisible, and as long as you’re doing that, I don’t care what’s on your business card.

 

15 August 2014

GNOME.Asia Summit 2014

Everyone has been blogging about GUADEC, but I’d like to talk about my other favorite conference of the year, which is GNOME.Asia. This year, it was in Beijing, a mightily interesting place: a giant megalopolis with grandiose architecture, yet surprisingly easy to navigate with its efficient metro system and affordable taxis. But the air quality is as bad as they say, at least during the incredibly hot summer days of our visit.

The conference itself was great. This year it was co-hosted with FUDCon's Asian edition, and it was interesting to see a crowd that's really different from the one that attends GUADEC: many more people involved in evangelising, deploying and using GNOME, as opposed to just developing it, which gave me a different perspective.

On a related note, I was happy to see a healthy delegation from Asia at GUADEC this year!

Sponsored by the GNOME Foundation

31 March 2014

Some introduction seems to be necessary

It appears my blog is currently reaching some places like planet.gnome.org and planet.fedoraproject.org, so I think some introduction may be necessary. My name is Baptiste Mille-Mathias. I’m French, living in the south of France near Cannes, with my partner Célia, my son Joshua and my daughter Soline. During work days I’m a System/Application Administrator […]

The post Some introduction seems to be necessary appeared first on Nothing Fancy.

30 March 2014

Pitivi and MediaGoblin, same fight!

Rather belatedly, let me point out (or remind you) that Pitivi, the video editing software, has launched a fundraising campaign through the GNOME Foundation. The goal: help get version 1.0 out by raising enough funds to dedicate people to the task full time and, if there is money left over, fund new features, which you can vote on. You will find more information in this post by antistress.

On the other side, MediaGoblin is a piece of software that lets you host your own equivalent of YouTube, your photos, and so on, and share all of this with your friends. No more censorship problems: you host the content yourself, and this decentralized model is what made the web strong. MediaGoblin is also running a fundraising campaign. I can only encourage you to donate to support these projects :)

16 February 2014

OpenLibernet

I saw a link to OpenLibernet and, after reading their FAQ, I believed there was a fundamental problem. I quickly read the full paper but found no answer.

I guess I have missed something, please explain it to me :)

A peer address is the hash of a cryptographic public key. It is used to encrypt certain packets as part of the routing protocol, serve as a payment address for the payment system (similar to a Bitcoin’s wallet address), but also serves as a unique identifier for a node, similar to IP Addresses in the current internet.

Also, a node may simply generate a new Peer Address anytime it chooses to.

When the balance of a neighbor hits a certain threshold, a payment request is initiated.

Malicious nodes could however cheat their neighbors and refuse to pay them their due traffic. For that, the protocol is designed to punish such malicious behavior through ostracism. A node will be automatically isolated from the network until it pays all its dues and resolves all conflicts with its neighbors.

Turkish Cat

What is preventing a malicious node from re-joining the network with a new peer address when it is getting close to receiving a payment request, thus discarding its balance?

The only limitation I see is "First, and to eliminate the churn caused by unstable nodes, a Layer 2 link becomes active only after it has been alive for a set amount of time.", but this is not a problem if you start another client in parallel when getting close to a payment threshold and switch to the new peer address when it is ready.

08 November 2013

Building a single GNOME module using jhbuild

I often see new contributors (and even seasoned hackers) wanting to hack on a GNOME module, say Empathy, trying to build it using:

jhbuild build empathy

This is obviously correct, but it asks jhbuild to build not only Empathy but also all of its dependencies (62 modules), so you'll end up building most of the GNOME stack. While building a full GNOME stack may sometimes be useful, it's generally not needed, and it's definitely not the easiest or fastest way to proceed.

Here is what I usually do.

First, I make sure all the dependencies of the module are installed using the distribution's packaging system. On Fedora this is done using:

sudo yum-builddep empathy

or on Ubuntu/Debian:

sudo apt-get build-dep empathy

If you are using a recent distribution, there is a good chance that most of these dependencies are still recent enough to build the module you want to hack on. Of course, as you are trying to build the master branch of the project, some dependencies may have been bumped to one of the latest development releases. But first, let's try to build just Empathy:

jhbuild buildone empathy

Chances are that some dependencies are missing or too old; you'll then see this kind of error message:

No package 'libsecret-1' found
Requested 'telepathy-glib >= 0.19.9' but version of Telepathy-GLib is 0.18.2

That means you'll have to build these two libraries in your jhbuild as well. Just check the list of dependencies of the module to find their exact module names:

jhbuild list empathy | grep secret
jhbuild list empathy | grep telepathy

In this example you'll see you have to build the libsecret and telepathy-glib modules:

jhbuild buildone libsecret telepathy-glib

Of course, these modules may have some extra dependencies of their own, so you may have to iterate this process a few times before being able to actually build the module you care about. But, from my experience, if you are using a recent distribution (like the latest Fedora release), the whole process will still be much faster than building the full stack. Furthermore, it will save you from having to deal with build errors from potentially 62 modules.
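Since the error messages have a regular shape, the hunt for missing dependencies can be semi-automated. A small sketch (the `missing_pkgs` helper name is mine, and the two sed patterns only cover the kinds of messages quoted above):

```shell
# missing_pkgs: read build output on stdin and print the pkg-config names
# reported as missing or too old, one per line.
missing_pkgs() {
    sed -n -e "s/^No package '\(.*\)' found$/\1/p" \
           -e "s/^Requested '\([^ ']*\)[^']*' but .*/\1/p"
}

# Example with the two error messages quoted above:
printf "No package 'libsecret-1' found\nRequested 'telepathy-glib >= 0.19.9' but version of Telepathy-GLib is 0.18.2\n" | missing_pkgs
```

You can then feed each printed name to `jhbuild list empathy | grep <name>` to find the corresponding module.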

03 August 2013

GNOME.Asia 2013

This June, I was in Seoul, Korea for the GNOME.Asia Summit, the yearly occasion to meet up with the Asian side of the GNOME community. As always, it was an awesome conference, with so many cool people. I learned about new projects like Seafile and got to meet new friends and catch up with old ones.

I’d also like to thank my employer, Collabora, for sponsoring my flight, and the GNOME Foundation for paying for the hotel.

Sponsored by Collabora                              Sponsored by the GNOME Foundation

25 March 2013

SPICE on OSX, take 2

A while back, I made a Vinagre build for OSX. However, reproducing this build required lots of manual tweaking, the build was not working on newer OSX versions, and in the meantime the recommended SPICE client became remote-viewer. In short, this work was obsolete.

I've recently looked again at this, but this time with the goal of documenting the build process and making the build as easy as possible to reproduce. This is once again based off gtk-osx, with an additional moduleset containing the SPICE modules, and a script to download/install most of what is needed. I've also switched to building remote-viewer instead of vinagre.

This time, I've documented all of this work, but all you should have to do to build remote-viewer for OSX is to run a script, copy a configuration file to the right place, and then run a usual jhbuild build. Read the documentation for more detailed information about how to do an OSX build.

I've uploaded a binary built using these instructions, but it's lacking some features (USB redirection comes to mind), it's slow, etc., so... patches welcome! ;) Feel free to contact me if you are interested in making OSX builds and need help getting started, have build issues, ...

11 December 2012

FOSDEM 2013 Crossdesktop devroom Call for talks

The call for talks for the Crossdesktop devroom at FOSDEM 2013 ends this Friday. Don't wait: submit your talk proposal about your favourite part of GNOME now!

Proposals should be sent to the crossdesktop devroom mailing list (you don't have to subscribe).

29 November 2012

29 Nov 2012

Wandering in embedded land: part 2, Arduino turned remote control

Now that the Midea AC remote control protocol is mostly deciphered, the next step is to emulate the remote with an Arduino, since that's the system I use for the embedded greenhouse control. While waiting for my mail-ordered IR LED (I didn't want to solder one off my existing AC controllers), I started writing a bit of code and looking at the integration problems.

The hardware side

One of the challenges is that the Arduino system is already heavily packed: basically I use all the digital I/O pins except 5 (and 0 and 1, which are hooked to the serial support), and 2 of the 6 analog inputs, as the card already drives 2 SHT1x temp/humidity sensors, 2 light sensors, a home-made 8-way relay board, and a small LCD display. There isn't much room left, physically or in memory, for more wires or code! Fortunately, driving a LED requires minimal resources, and the schematic is trivial:

I actually used a 220 Ohm resistor since I didn't have a 100 Ohm one; the only effect is how far away the signal can be received, really not a problem in my case. Also, I initially hooked it to pin 5, which shouldn't have been a problem, since that's the free slot I have available on the Arduino.

The software side

My thinking was: well, I just need to recreate the same set of light patterns to emulate the remote control, and that's done. It sounded fairly simple, so I started coding routines that would switch the LED on or off for 1T, 3T and 4T durations. The core of the code looked like:


void emit_midea_start(void) {
    ir_down(T_8);
    ir_up(T_8);
}

void emit_midea_end(void) {
    ir_down(T_1);
    ir_up(T_8);
}

void emit_midea_byte(byte b) {
    int i;
    byte cur = b;

    for (i = 0; i < 8; i++) {
        ir_down(T_1);
        if (cur & 1)
            ir_up(T_3);
        else
            ir_up(T_1);
        cur >>= 1;
    }
    cur = ~b;
    for (i = 0; i < 8; i++) {
        ir_down(T_1);
        if (cur & 1)
            ir_up(T_3);
        else
            ir_up(T_1);
        cur >>= 1;
    }
}

where ir_up() and ir_down() respectively activate or deactivate pin 5, set as OUTPUT, for the given duration defined as macros.

Playing with 2 arduinos simultaneously

Of course, the simplest way to test my code was to set up the new module on another Arduino positioned in front of the Arduino with the IR receptor, which was running the same code used for decoding the protocol.

The nice thing is that you can hook up the two Arduinos on 2 different USB cables connected to the same machine; they will show up as ttyUSB0 and ttyUSB1, and once you have looked at the serial output you can tell which is which. The only cumbersome part is having to select the serial port of the other one when you want to switch boxes, either to monitor the output or to upload a new version of the code; so far things are rather easy.

Except it just didn't work!!!

Not the Arduino's fault: I actually replaced the IR LED with a normal one from time to time to verify it was firing for a fraction of a second when emitting the sequence. No, the problem was that the IR receiver was detecting transitions, but none with the expected durations or order; nothing I could really consider a mapping of what my code was sending. So I tweaked the emitting code over and over, rewriting the timing routines in 3 different ways, trying to disable interrupts, etc. Nothing worked!

Clearly there was something I hadn't understood... and I started searching on Google and reading, first about timing issues on the Arduino, but things ought to be correct there, and then about existing remote control code for the Arduino and others. Then I hit Ken Shirriff's blog about his IR library for the Arduino and realized that the IR LED and the IR receiver don't operate at the same level. The LED really can just be switched on or off, but the IR receiver is calibrated for a given frequency (38 kHz in this case) and does not report whether it gets IR light, but whether it gets the 38 kHz pulse carried by the IR light. In a nutshell, the IR receiver was decoding my analog 0's but not the 1's, because it was failing to catch a 38 kHz pulse: I was switching the IR LED permanently on, which was not recognized as a 1 and generated erroneous transitions.

Emitting the 38KHz pulse

Ken Shirriff has another great article titled Secrets of Arduino PWM explaining the details used to generate a pulse automatically on *selected* Arduino digital outputs and how to set this up. This is rather complex and nicely encapsulated in his infrared library code, but I would suggest having a look if you're starting advanced development on the Arduino.

The simplest approach is then to use Ken's IRremote library, by first installing it into the installed Arduino environment:

  • create a new directory /usr/share/arduino/libraries/IRremote (as root)
  • copy IRremote.cpp, IRremote.h and IRremoteInt.h there

and then use it in the midea_ir.ino program:


#include <IRremote.h>

IRsend irsend;

int IRpin = 3;

This includes the library in the resulting program and defines an IRsend object that we will use to drive the IR LED. One thing to note is that by default the IRremote library drives only digital pin 3; you can modify it to use a couple of other pins, but it is not possible to drive the PWM for digital pin 5, which is the one not currently used on my greenhouse Arduino.

The idea is then to just replace the ir_down() and ir_up() calls in the code with the equivalent low-level entry points driving the LED in the IRsend object: first use irsend.enableIROut(38) to enable the pulse at 38 kHz on the default pin (digital 3), then use irsend.mark(usec) as the equivalent of ir_down() and irsend.space(usec) for ir_up():


void emit_midea_start(void) {
    irsend.enableIROut(38);
    irsend.mark(4200);
    irsend.space(4500);
}

void emit_midea_end(void) {
    irsend.mark(550);
    irsend.space(4500);
}

void emit_midea_byte(byte b) {
    int i;
    byte cur = b;
    byte mask = 0x80;

    for (i = 0; i < 8; i++) {
        irsend.mark(450);
        if (cur & mask)
            irsend.space(1700);
        else
            irsend.space(600);
        mask >>= 1;
    }

    ...

Checking with a normal LED allowed me to spot a brief light when emitting the frame, so it was basically looking okay...

And this worked: placing the emitting Arduino in front of the receiving one, the IR analyzer started to decode the frames, just as with the real remote control. Things were looking good again!

But it failed the real test... when put in front of the AC, the hardware didn't react; some improvement was still needed.

Check your timings, theory vs. practice

I suspected some timing issue, not with the 38 kHz pulse, as the code from Ken was working fine for an array of devices, but rather with how my code was emitting. Another precious hint was found in the blog about the library:


IR sensors typically cause the mark to be measured as longer than expected and the space to be shorter than expected. The code extends marks by 100us to account for this (the value MARK_EXCESS). You may need to tweak the expected values or tolerances in this case.

Remember that the receiver does some logic on the input to detect the pulse at 38 kHz; that means that while a logic 0 can be detected relatively quickly, it will take at least a few beats before the sync to the pulse is recognized and the receiver switches its output to a logic 1. In a nutshell, a 1T low duration takes less time to recognize than a 1T high duration. I was also afraid that the overall time to send a full frame would drift over the fixed limit needed to transmit it.

So I tweaked the emitting code to count the actual overall duration of the frames, and also added to the receiver's decoding code a display of the first 10 durations between transitions. I then reran the receiver, looking at the same input from the real remote control and from the Arduino emulation, and found that on average:

  • the emulated 8T down was 200us too long
  • the emulated 8T up was 100us too short
  • the emulated 1T down at the beginning of a bit was 100us too long
  • the emulated 1T up at the end of logical 0 was 80us too short
  • the emulated 3T up at the end of logical 1 was 50us too short

After tweaking the durations accordingly in the emitter code, I got my first successful emulated command to the AC, properly switching it off. SUCCESS!!!

I then finished the code to provide the weird temperature conversion front-end routines, and glued that together as a test application looping over a minute:

  • switching to cooling to 23C for 15s
  • switching to heating to 26C for 15s
  • switching the AC off for 30s

The midea_ir_v1.ino code
is available for download, analysis and reuse. I would suggest to not
let this run for long in fron of an AC as the very frequent change of mode
may not be good for the hardware (nor for the electricity bill !).


Generating the 38KHz pulse in software

While the PWM generation has a number of advantages, especially w.r.t.
regularity of the pattern and no risk of drift due, for example, to delays
handling interrupts, in my case it has the serious drawback of forcing the
use of a given pin (3 by default, or 9 if switching to a different timer in
the IRremote code), and those are not available unless I get out the
soldering iron and change some of the existing routing on my add-on board.
So the next step is to also implement the 38KHz pulse in software. This
should only affect the up phase; the down phase consists of no emission
and hence is implemented by a simple:


void send_space(int us) {
    digitalWrite(IRpin, LOW);
    delayMicroseconds(us);
}

The up part should be divided into HIGH for most of the duration, followed
by a small LOW marking the pulse. 38 KHz means a 26.316 microseconds
period. Since the Arduino documentation indicates delayMicroseconds() is
reliable only above 3 microseconds, it seems reasonable to use
a 22us HIGH / 4us LOW split, and to let the remaining computation fill
the sub-microsecond part of the period, which ought to be accurate enough.
One of the points of the code below is to try to avoid excessive drift, in
two ways:

  • by doing the accounting over the total length of the up period,
    not trying to just stack 21 periods
  • by running a busy loop when the delay left is minimal, rather than
    calling delayMicroseconds() for too small an amount (not sure this is
    effective: the micros() value seems to be updated periodically by a timer
    interrupt handler, and it doesn't look like the chip provides a
    fine-grained counter).

The resulting code doesn't look very nice:


void send_mark(int us) {
    unsigned long e, t = micros();
    e = t + us;
    while (t < e) {
        digitalWrite(IRpin, HIGH);
        if (e - t < 4) {
            /* not even time for the 4us low part: busy-wait to the end */
            while ((t = micros()) < e);
            digitalWrite(IRpin, LOW);
            break;
        }
        if (e - t < 22) {
            /* less than a full period left: stay high for the remainder */
            delayMicroseconds(e - t);
            digitalWrite(IRpin, LOW);
            break;
        }
        delayMicroseconds(22);
        digitalWrite(IRpin, LOW);
        t = micros();
        if (e - t < 4) {
            while ((t = micros()) < e);
            break;
        }
        delayMicroseconds(4);
        t = micros();
    }
}

But to my surprise, once I replaced all the irsend.mark() and irsend.space()
calls with equivalent calls to send_mark() and send_space(), the IRanalyzer
running on the second Arduino properly understood the sequence, proving that
the IR receiver properly picked up the signal. Yay!

Of course that didn't work the first time on the real hardware.
After a bit of analysis of the resulting timings exposed by IRanalyzer,
I noticed that the marks at the beginning of bits were all nearly 100us too
long; I switched the generation from 450us to 350us and, bingo, that worked
with the real aircon!

The resulting midea_ir_v2.ino module
is very specific code, but it is tiny (less than 200
lines), and the hardware side is also really minimal: a single resistor and
the IR LED.

Epilogue

The code is now plugged in and working, but v2 just could not work in the
real environment with all the other sensors and communication going on.
I suspect that the number of foreign interrupts breaks the 38KHz
pulse generation; switching back to the PWM-generated pulse using the
IRremote library works very reliably. So I had to unsolder pin 3
and reassign it to the IR LED, but that was a small price to pay
compared to debugging the timing issues in situ!

The next step in the embedded work will be to replace the aging NSLU2
driving the arduino by a shiny new Raspberry Pi !

This entry will be kept at http://veillard.com/embedded/midea.html.

26 November 2012


Wandering in embedded land: part 1, Midea 美的 aircon protocol

I have been a user of Arduinos for a few years now; I use them to
control my greenhouse (I grow orchids). This means collecting data
for various parameters (temperature, hygrometry, light) and actioning
a collection of devices in reaction (a fan, a misting pump, a fogging
machine, a heater). The control part is actually done by an NSLU2, which
also collects the data, exports it as graphs on the internet and allows me
to manually jump in and take action if needed, even if I'm far away,
using an ssh connection.

This setup has been working well for me for a few years, but since our
move to China I have had an aircon installed in the greenhouse, as in
other parts of the home. And that's where I have a problem: this AC of
brand Midea (a very common home appliance brand in China) can only be
controlled through a remote control. Until now that meant I had no
way to automate heating or cooling, which is perfectly unreasonable :-)

After some googling, the most useful reference I found about those
is Tom's
Site page on building a remote adapter
for them. It explained most
parts of the protocol, but not all: basically he stopped at the
core of the interface and didn't go into details, for example the
command encoding. The 3 things I really need are:

  • Start cooling to a given temperature
  • Start heating to a given temperature
  • Stop the AC

I don't really need full fan speed control; low speed is quite sufficient
for the greenhouse.

Restarting the Arduino development

I hadn't touched the Arduino development environment for the last few
years, and I remember it being a bit painful to set up at the time. With
Fedora 17 things have changed: a simple

yum install arduino

and launching the arduino tool worked the first time; it actually asked me
for permission to tweak groups so that I, as the current user, could talk
over the USB serial line to the Arduino. Once that was done and I had
logged in again, everything worked perfectly. Congratulations to the
packagers, well done! The only software annoyance is that it often takes a
dozen seconds between the time an Arduino is connected or powered and the
time it appears in the ttyUSB? serial port options in the UI, but that's
probably not arduino's fault.

The arduino environment didn't really change in all those years;
the two notable exceptions are the very long list of different boards
supported now, and the fact that arduino code files were renamed from
.pde to .ino!

Learning about the data emitted

The first thing needed was to double-check Tom's results with
our own hardware, then learn about the protocol to be able to construct
the commands above. To do this I hooked an IR receptor to the Arduino on
digital pin 3; the graphic below shows the logic, it's very simple:

Then I loaded a modified (for IRpin 3) version of Walter Anderson's
IRanalyzer.pde
onto the Arduino, started firing the aircon remote control at the
receiver and looked at the result: total garbage! Whatever key was
pressed, the output had no structure and actually looked as random as
the input without any key being pressed :-\

It took me a couple of hours of tweaking to find out that the metal
enclosure of the receiver had to be grounded too: the GND pin wasn't
connected, and not grounding it led to random results!

Once that was fixed, the data read by the Arduino started to make some
sense, and it looked like the protocol was indeed the same as the
one described on Tom's site.

The key to understanding how the remote works is that it
encodes a digital input (3 bytes for the Midea AC protocol) as a series of
0 and 1 patterns, each of them defined by a no-signal duration
followed by a short pulse at 38KHz to encode a 0, or by a long
pulse at 38KHz to encode a 1:

Each T delay corresponds to 21 pulses of a 38KHz signal; this is
thus a variable-length encoding.
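As a sketch (plain C, names are mine, not the author's code), the bit layout just described can be expanded into a sequence of down/up durations expressed in units of T:

```c
#include <stddef.h>

/* Expand a 6-byte frame payload into alternating down/up durations in
   units of T (one T = 21 pulses at 38KHz, about 553us).
   Start: 8T down + 8T up; each bit, MSB first: 1T down, then 1T up for
   a 0 or 3T up for a 1; end: 1T down plus the 8T end marker, lumped
   together here to keep the arithmetic simple. */
size_t encode_frame(const unsigned char payload[6], int out[128]) {
    size_t n = 0;
    out[n++] = 8;   /* start marker, down */
    out[n++] = 8;   /* start marker, up */
    for (int i = 0; i < 6; i++)
        for (int b = 7; b >= 0; b--) {
            out[n++] = 1;                               /* bit start, down */
            out[n++] = ((payload[i] >> b) & 1) ? 3 : 1; /* up: 1T = 0, 3T = 1 */
        }
    out[n++] = 1;   /* closing down */
    out[n++] = 8;   /* end marker / inter-frame gap */
    return n;       /* number of durations written */
}
```

For a balanced payload such as B2 4D 9F 60 B0 4F (each byte followed by its complement), this yields 100 durations summing to 169 T per frame, consistent with the double-frame duration arithmetic worked out further down.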

As I was making progress on recognizing the patterns sent
by the aircon, I modified the program to give a more synthetic view
of the received frames; you can use my own
IRanalyzer.ino:
it is extended to allow recording a variable number of transitions,
detects the start transition as a 3-4 ms up and the end as a 3-4 ms
down from the emitter, and then shows the transmitted data as a bit field
and as hexadecimal bytes:


Waiting...
Bit stream detected: 102 transitions
D U 1011 0010 0100 1101 1001 1111 0110 0000 1011 0000 0100 1111 dUD Bit stream end : B2 4D 9F 60 B0 4F !
4484 4324 608 1572 604 472 596 1580 600 1580 !
Waiting...

So basically what we find here:

  • the frame start markers: 8T down, 8T up
  • 6 bytes of payload; this is actually 3 bytes of data, but after
    each byte is sent, its complement is sent too
  • the end of the frame consists of 1T down, 4T up and then 4T down

There are a few interesting things to note about this encoding:

  • It is redundant, allowing the detection of errors or of stray data
    coming from other 38KHz remotes (which are really common!)
  • All frames are actually sent a second time just after the first one,
    so the amount of redundancy is around 4 to 1 in the end!
  • By re-emitting inverted values, the amounts of 0s and 1s sent are the
    same; as a result a frame always has a constant duration even though
    the encoding uses variable length
  • A double frame duration is around : 2 * (8 + 8 + 3*2*8 + 3*4*8 + 1 + 8) * 21 / 38000 ~= 186 ms
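The byte-plus-complement scheme can be sketched in plain C (a hypothetical helper, not the author's code):

```c
/* Each data byte is followed on the wire by its bitwise complement,
   so 3 data bytes become the 6-byte payload observed above. */
void expand_payload(const unsigned char data[3], unsigned char out[6]) {
    for (int i = 0; i < 3; i++) {
        out[2 * i]     = data[i];
        out[2 * i + 1] = (unsigned char)~data[i];
    }
}
```

For the capture above, the 3 data bytes B2 9F B0 expand to B2 4D 9F 60 B0 4F.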

The protocol decoding

Once the frame is decoded properly, we are down to analyzing
only 3 bytes of input per command. So I started pressing the buttons
in various ways and recording the emitted sequences:


Cool 24 fan level 3
1011 0010 0100 1101 0011 1111 1100 0000 0100 0000 1011 1111 B2 4D 3F C0 40 BF
Cool 24 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0100 0000 1011 1111 B2 4D 9F 60 40 BF
Cool 20 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0010 0000 1101 1111 B2 4D 9F 60 20 DF
Cool 19 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0011 0000 1100 1111 B2 4D 9F 60 30 CF
Heat 18 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0001 1100 1110 0011 B2 4D 9F 60 1C E3
Heat 17 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 0000 1100 1111 0011 B2 4D 9F 60 0C F3
Heat 29 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 1010 1100 0101 0011 B2 4D 9F 60 AC 53
Heat 30 fan level 1
1011 0010 0100 1101 1001 1111 0110 0000 1011 1100 0100 0011 B2 4D 9F 60 BC 43
Stop Heat 30 fan level 1
1011 0010 0100 1101 0111 1011 1000 0100 1110 0000 0001 1111 B2 4D 7B 84 E0 1F
Cool 28 fan 1
1011 0010 0100 1101 1001 1111 0110 0000 1000 0000 0111 1111 B2 4D 9F 60 80 7F
Stop Cool 28 fan 1
1011 0010 0100 1101 0111 1011 1000 0100 1110 0000 0001 1111 B2 4D 7B 84 E0 1F

The immediately obvious information is that the first byte is the constant
0xB2, as noted on Tom's Site. Another thing one can guess is that the command
from the control is (in general) absolute, not relative to the current state
of the AC, so commands are idempotent: if the AC failed to catch one key
press, it will still reach the correct state when the command is repeated.
This just makes sense from a UI point of view! After a bit of analysis and
further testing, the encoding of the 3 bytes seems to be:

[1011 0010] [ffff 1111] [tttt cccc]

Where tttt == the temperature in Celsius, encoded as follows:

17: 0000, 18: 0001, 19: 0011, 20: 0010, 21: 0110,
22: 0111, 23: 0101, 24: 0100, 25: 1100, 26: 1101,
27: 1001, 28: 1000, 29: 1010, 30: 1011, off: 1110

I fail to see any logic in that encoding; I don't know what the Midea
guys were thinking when they picked those values. What sucks is that the
protocol seems to have a hardcoded range of 17-30C, while for the orchids
I basically try to stay in the range 15-35C, i.e. I will have to play with
the sensor output to do the detection. Moreover, my testing shows that even
when asked to keep warm at 17, the AC will continue to heat until well above
19C. I can't trust it to be accurate; best to keep the control and logic
on our side!

cccc == the command: 0000 to cool, 1100 to heat, 1000 for automatic selection,
and 1101 for the moisture-removal mode.

Lastly, ffff seems to be the fan control: 1001 for low speed, 0101 for
medium speed, 0011 for high speed, 1011 for automatic, and 1110 for off.
There is also a mode about minimizing energy use, useful at night, where
the fan runs even slower than low speed, but I haven't yet understood
how that actually works.

There are still 4 bits left undeciphered; they could be related to 2
functions that I don't use: a timer and the oscillation of the air flow.
I didn't try to dig, especially with a remote control and documentation
in Chinese!

Last but not least: the stop command is 0xB2 0x7B 0xE0; it's the same
whatever the current state might be.
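Putting the decoded fields together, a minimal sketch in plain C (all names are mine, not the author's; the nibble values come from the tables above):

```c
/* Assemble the 3 data bytes [1011 0010][ffff 1111][tttt cccc]
   from the decoded fields above. */
enum mode { MODE_COOL = 0x0, MODE_AUTO = 0x8, MODE_HEAT = 0xC, MODE_DRY = 0xD };
enum fan  { FAN_HIGH = 0x3, FAN_MED = 0x5, FAN_LOW = 0x9, FAN_AUTO = 0xB };

/* temperature nibbles for 17..30C, from the table above */
static const unsigned char temp_nibble[14] = {
    0x0, 0x1, 0x3, 0x2, 0x6, 0x7, 0x5, 0x4,
    0xC, 0xD, 0x9, 0x8, 0xA, 0xB
};

void midea_command(int celsius, enum mode m, enum fan f, unsigned char out[3]) {
    out[0] = 0xB2;
    out[1] = (unsigned char)(((unsigned)f << 4) | 0x0F);
    out[2] = (unsigned char)((temp_nibble[celsius - 17] << 4) | (unsigned)m);
}

void midea_stop(unsigned char out[3]) {
    /* the stop command does not follow the pattern: fixed bytes */
    out[0] = 0xB2; out[1] = 0x7B; out[2] = 0xE0;
}
```

midea_command(24, MODE_COOL, FAN_LOW, ...) yields B2 9F 40 and midea_command(18, MODE_HEAT, FAN_LOW, ...) yields B2 9F 1C, matching the captures above.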

At this point I was relatively confident I would be able to control the
AC from an Arduino, using a relatively simple IR LED control, it ought to
be a "simple matter of programming", right ?

Well, that will be the topic of the next part ;-) !

This entry will be kept at http://veillard.com/embedded/midea.html.

19 February 2012

Tip for python-mode with Emacs

If you expect 'Alt + d' to only remove the first part 'foo_' of 'foo_bar' with the great python-mode, you can make this change to python-mode.el:

- (modify-syntax-entry ?\_ "w" py-mode-syntax-table)
+ (modify-syntax-entry ?\_ "_" py-mode-syntax-table)

Thank you Ivan.

Update: with python-mode v6.0.4, add this line to python-mode-syntax-table (line 153):

(modify-syntax-entry ?\_ "_" table)

27 January 2012


FOSDEM 2012

My last FOSDEM participation was in 2004, and I still keep in mind many good moments with my French and Belgian GNOME friends!


So I'm totally excited to meet them again in 2012 ... :)

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

15 November 2011

Outreach in GNOME

The GNOME Montréal Summit was held a month ago now, and not only was it lots of fun, but also a very productive time. Marina held a session about outreach in GNOME, and we spent time discussing different ways to improve welcoming and attracting people to GNOME. Let me share some of the points we raised, supplemented by my own personal opinions, which do not reflect those of my employer, when I’ll have a job.

A warm welcome

There has been a lot of nice work done structuring and cleaning up GNOME Love page. We now have a list of people newcomers can contact if they are interested in a particular project. Feel free to add your name and project to the list, the more entry points we get, the better for them!

I tend to think there is still a bit too much content on the GNOME Love page, maybe we could use more pretty diagrams (platform overview, ways to get involved) to keep the excitement growing and to reduce the amount of text we have right now (GUI tutorials, books about GNOME,  tips & tricks). Feedback appreciated!

Start small

We tend to think of contributions as patches and a certain amount of code added to a project. However, it’s not easy at all for newcomers to just pop in and work on a patch, especially in GNOME where most software follows strict rules (as in coding style, GObject API style, etc.). And since GNOME maintains (again, for the most part) a very high quality in its code, backed by many hackers, whether they’re part of a company or independent contributors, it makes the landing of a patch even tougher.

Which is why we should encourage everyone who wants to get involved to work on small tasks, be it fixing a string typo, rewording a message, or marking plural forms for translation. Working on manageable changes ensures that patches are completed, and landing these patches builds confidence to work on bigger ones. Having your name in the commit log is a great reward that encourages sticking around and digging for more.

Advertise early, advertise often

If we want to get loads of people coming toward GNOME, we should definitely talk more and spread the word about the GNOME Outreach Program for Women (GOPW) and the Google Summer of Code (GSoC) earlier.

Google doesn’t announce the program very far in advance and approved organizations are only published three weeks before the application deadline, but we should encourage students to get involved in GNOME early and keep an eye out for such announcements. Having a list of mentors who can help newcomers anytime throughout the year and having that list included on the Google Summer of Code wiki page of organizations that provide year-round informal mentorship should help attract students to GNOME.

On our side, we could definitely gather ideas and promote the programs earlier. I don’t have exact dates in mind, but our KDE fellows promote the Summer of Code in early March, if not before. Not only would that help spread the word, but students might get involved earlier and get to know the tools and community before the actual program.

Communication is key to success

We have to get better at communicating with interns, and make sure they get the help and feedback they need. We have different channels of communication in GNOME, mainly IRC and mailing lists. Both are a bit intimidating to the newcomer (I still proceed with extreme care when I use them), so it would be good to have a short tutorial about the main mailing lists around, how to connect on IRC and what to expect out of it.

Always two there are, no more, no less

In order to increase the chances of success for the interns, we need good mentors. Most people underestimate what it takes to be a good mentor: being nice, supportive, competent, enthusiastic. You have to remember you’re helping someone to land in the big GNOME land without too much hassle, so consider it carefully. I encourage you to read this very informative blog post if you’re thinking about mentoring a student.

The Summer of Code administrators at GNOME could perhaps keep an eye on mentors as well as students, not with weekly reports but just by poking them from time to time and making sure everything is going well.

Show me the way

To help students set up their workflow, it would be great to have full-length screen-casts demonstrating how to fix a bug in GNOME, starting on the Bugzilla page and finishing on the same page when attaching the final patch. This means going through cloning the module with Git, using grep to find the faulty line, editing the code, using Git to look at the diff and format the final patch. All this in one video would really help connect the parts and suggest a way to work for students.

GNOME Love bug drive

Please consider attaching the gnome-love keyword when you file or triage a bug that is easy to fix. A selection of current GNOME Love bugs is essential to help newcomers figure out how they can start contributing.

Good GNOME Love bugs are trivial or straightforward bugs that everyone agrees on, e.g. paper-cut bugs or corner cases. It’s helpful to specify in the bug the file or files that will need to be modified, and any reference code that does something similar. Even the most trivial bugs are suitable candidates, because in the end, fixing a GNOME Love bug is as much about learning the process as about the fix itself!

Get involved

If you want to help us gather more people around GNOME and help them find their spot in our community, make sure to subscribe to the outreach-list mailing list.

Thanks for reading!

And thanks to Marina and Karen for reviewing this post!

09 October 2011

Feedback on GNOME 3.0

After 5 months with GNOME 3.0, I'm really happy with the experience. At the end of a work day,
my mind is no longer exhausted from fighting window placement and hunting for applications.

GNOME 3.0 is really stable, except with the Open Source driver on my Radeon 5870 (4 crashes in 2 months).

I really like the behavior of dual-head where the secondary screen has only one virtual screen.
For me, there are just 3 annoying points:

  • Ctrl + Del to remove a file in Nautilus; maybe it's a Fedora setting, but this change is just @!#, as I already have a Trash to undo my mistakes (http://www.khattam.info/howto-enable-delete-key-in-nautilus-3-fedora-15-2011-06-01.html)
  • the Alt key to shut down: no, I don't want to waste energy for days, and my PC boots quickly.
  • only vertical virtual screens: I find it a bit painful to move down two screens when the target would be reachable in one move with a 2x2 layout, but I understand that layout doesn't fit well with the GNOME 3 design.

To have a good experience with GNOME 3, I use:

  • Windows key + type to launch everything
  • Ctrl + Shift + Alt + arrows to move the application between virtual screens
  • Ctrl + click in the launcher when I really want a new instance (the default behavior is perfect)
  • snap à la Windows 7 is great
  • Alt + Tab then the arrow keys to select an app

Don't forget to read https://live.gnome.org/GnomeShell/CheatSheet or the Help (System key + 'Help').

It's not specific to GNOME 3, but you can change the volume by hovering your mouse over the applet (don't click, just hover) and scrolling.
With GTK+, did you know you can reach the end of a scrolled area with a left click on the arrow, and a specific position with a middle click?

I'm impressed by the new features of GNOME 3.2 and I'm waiting for Fedora 17 to enjoy it!

23 August 2011

GNU Hackers Meeting 2011 in Paris

In case you are in the Paris area and don't know already, there is a
GNU Hackers Meeting event being held from Thursday 25th to Sunday 28th
August, 2011 at IRILL. If you are a GNU user, enthusiast, or
contributor of any kind, feel free to come. I guess you can still
drop an email to ghm-registration@gnu.org.

For folks around on Wednesday (yeah, that's tomorrow), we are having a
dinner around 8 PM at the Mussuwam, a Senegalese restaurant in Paris, near
Place d'Italie. When you get there, just give them the secret password
(which is 'GNU') and they'll show you where the rest of the crowd sits.
Be sure to keep that password secret though. No one else should be in
the know.

Happy hacking and I hope to see you guys there.

04 July 2011

Going to RMLL (LSM) and Debconf!

Next week, I’ll head to Strasbourg for the Rencontres Mondiales du Logiciel Libre 2011. On Monday morning, I’ll be giving my Debian Packaging Tutorial for the second time. Let’s hope it goes well and I can recruit some future DDs!

Then, at the end of July, I’ll attend Debconf again. Unfortunately, I won’t be able to participate in Debcamp this year, but I look forward to a full week of talks and exciting discussions. There, I’ll be chairing two sessions about Ruby in Debian and Quality Assurance.

17 February 2011

Recent Libgda evolutions

It’s been a long time since I blogged about Libgda (and for the matter since I blogged at all!). Here is a quick outline on what has been going on regarding Libgda for the past few months:

  • Libgda’s latest version is now 4.2.4
  • many bugs have been corrected and it’s now very stable
  • the documentation is now fairly exhaustive and includes a lot of examples
  • a GTK3 branch is maintained, it contains all the modifications to make Libgda work in the GTK3 environment
  • the GdaBrowser and GdaSql tools have had a lot of work and are now both mature and stable
  • using the NSIS tool, I’ve made available a new Windows installer for the GdaBrowser and associated tools, available at http://www.gnome.org/~vivien/GdaBrowserSetup.exe. It’s only available in English and French, please test it and report any error.

In the next months, I’ll keep polishing the GdaBrowser tool, which I use on a daily basis (and of course correcting bugs).

21 March 2010

16 March 2010

Webkit fun, maths and an ebook reader

I have been toying with webkit lately, and even managed to do some pretty things with it. As a consequence, I haven’t worked that much on ekiga, but perhaps some of my experiments will turn into something interesting there. I have an experimental branch with a less than fifty lines patch… I’m still trying to find a way to do more with less code : I want to do as little GObject-inheritance as possible!

That little programming was done while studying class field theory, which is pretty nice on the high-level principles and somewhat awful on the more technical aspects. I also read again some old articles on modular forms, but I can’t say that was “studying” : since it was one of the main objects of my Ph.D, that came back pretty smoothly…

I found a few minutes to enter a brick-and-mortar shop and have a look at the ebook readers on display. There was only *one* of them : the Sony PRS-600. I was pretty unimpressed : the display was too dark (because it was a touch screen?), but that wasn’t the worst deal breaker. I inserted an SD card where I had put a sample of the type of documents I read : they showed up as a flat list (pain #1), and not all of them (no djvu) (pain #2), and finally, one of them showed up too small… and ended up fully unreadable when I tried to zoom (pain #3). I guess that settles the question I had on whether my next techno-tool would be a netbook or an ebook reader… That probably means I’ll look more seriously into fixing the last bug I reported on evince (internal bookmarks in documents).

24 February 2010

A fresh start in my professional life

Hello everyone,

I have been neglecting this blog for some time. Whether it's the times, a phase in my life, or simply something else, I have no idea.

I just wanted to announce that I am leaving my current employer, a government agency, to look for experience in the private sector. Indeed, I am more and more disappointed by the Administration.

As you know, I have been passionate about information security for a few years now. Together with training in Information Security Management, this gives me the ambition to bring my experience to an employer (to be determined) who could allow me to improve it while benefiting from my skills.

If you have any good leads, I am obviously interested. ^^

16 January 2010

New Libgda releases

With the beginning of the year come new releases of Libgda:

  • version 4.0.6 which contains corrections for the stable branch
  • version 4.1.4, a beta version for the upcoming 4.2 version

The 4.1.4 API is now considered stable and, except for minor corrections, should not be modified anymore.

This new version also includes a new database adapter (provider) to connect to databases through a web server (which of course needs to be configured for that purpose), as illustrated by the following diagram:
WebProvider usage

The database being accessed by the web server can be any type supported by the PEAR::MDB2 module.

The GdaBrowser application now supports defining presentation preferences for each table’s column, which are used when data from a table’s column needs to be displayed:
GdaBrowser table column's preferences
The UI extension now supports improved custom layout, described through a simple XML syntax, as shown in the following screenshot of the gdaui-demo-4.0 program:

Form custom layout

For more information, please visit the http://www.gnome-db.org web site.

08 January 2010

Attending XMPP Summit and FOSDEM, 5th-8th of February in Brussels

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting. For the third year in a row, I’ll be flying to Brussels, Belgium next month to attend the XMPP Summit/FOSDEM combo. I didn’t look through the FOSDEM schedule yet, but when it comes to XMPP, I’m looking forward to some discussions on Jingle Nodes and Publish-Subscribe. I’ve been working more and more with XMPP in the past months, especially hacking on ejabberd, and attending is a good motivation to get some of my Jingle Nodes related code shaped up on time. See you there!


30 December 2009

Reminder - Definition of the Hacker

The hacker is a computer enthusiast, often very gifted, whose only goals are to "tinker" with programs and hardware (software and hardware) in order to obtain quality results for himself, for the evolution of technology, and for the recognition of his peers.

Hacker conventions are gatherings where these computer enthusiasts meet, discuss, and compare their work.

For many years, the trend has been to wrongly confuse the hacker with the cracker, whose goals are not always legal.

Yet, as can never be repeated enough, the hacker's goals are laudable and contribute actively to the progress of computing and of the tools we use every day.

05 November 2009

Attracted to FLT

I have been a little stuck for some weeks: a new year started (no, this post hasn’t been stuck since January; school years start in September) and I have students to tend to. As I like to say: good students bring work because you have to push them high, and bad students bring work because you have to push them up from low! Either way, it has been keeping me pretty busy.

Still, I found the time to read some more maths, but got lost on something quite unrelated to my main objective : I just read about number theory and the ideas behind the proof of Fermat’s Last Theorem (Taylor and Wiles’ theorem now). That was supposed to be my second target! Oh, well, I’ll just try to hit my first target now (Deligne’s proof of the Weil conjectures). And then go back to FLT for a new and deeper reading.

I only played a little with ekiga’s code — mostly removing dead code. Not much : low motivation.

15 October 2009

gwt-strophe 0.1.0 released

I just released the first version of gwt-strophe, GWT bindings for the Strophe XMPP library. There's not much to say other than that it is pretty young, with all that can imply. The project is hosted at https://launchpad.net/gwt-strophe


11 July 2009

Slides from RMLL (and much more)

So, I’m back from the Rencontres Mondiales du Logiciel Libre, which took place in Nantes this year. It was great to see all those people from the french Free Software community again, and I look forward to seeing them again next year in Bordeaux (too bad the Toulouse bid wasn’t chosen).

The Debian booth, mainly organized by Xavier Oswald and Aurélien Couderc, with help from Raphaël, Roland and others (but not me!), got a lot of visits, and Debian’s popularity is high in the community (probably because RMLL is mostly for über-geeks, and Debian’s market share is still very high in this sub-community).

I spent quite a lot of time with the Ubuntu-FR crew, which I hadn’t met before. They do an awesome work on getting new people to use Linux (providing great docs and support), and do very well (much better than in the past) at giving a good global picture of the Free Software world (Linux != Ubuntu, other projects do exist and play a very large role in Ubuntu’s success, etc). It’s great to see Free Software’s promotion in France being in such good hands. (Full disclosure: I got a free mug (recycled plastic) with my Ubuntu-FR T-shirt, which might affect my judgement).

I gave two talks, on two topics I wanted to talk about for some time. First one was about the interactions between users, distributions and upstream projects, with a focus on Ubuntu’s development model and relationships with Debian and upstream projects. Second one was about voting methods, and Condorcet in particular. If you attended one of those talks, feedback (good or bad) is welcomed (either in comments or by mail). Slides are also available (in french):

On a more general note, I still don’t understand why the “Mondiales” in RMLL’s title isn’t being dropped or replaced by “Francophones“. Seeing the organization congratulate themselves because 30% of the talks were in english was quite funny, since in most cases, the english part of the talk was “Is there someone not understanding french? no? OK, let’s go on in french.“, and all the announcements were made in french only. Seriously, RMLL is a great (probably the best) french-speaking community event. But it’s not FOSDEM: different goals, different people. Instead of trying (and failing) to make it an international event, it would be much better to focus on making it a better french-speaking event, for example by getting more french-speaking developers to come and talk (you see at least 5 times more french-speaking developers in FOSDEM than in RMLL).

I’m now back in Lyon for two days, before leaving for the Montreal Linux Symposium, then coming back to Lyon for three days, then DebConf from the 23rd to the 31st, and then moving to Nancy, where I will start as an assistant professor in September (a permanent (tenured) position).

26 February 2009

fatal: protocol error: expected sha/ref

Dear Lennart,

You should probably know that typing the correct URL would work better for cloning a bzr branch (yes, a branch, not a repository).

This is what I get when I try to feed git a random invalid URL:

$ git clone git://github.com/idontexist
Initialized empty Git repository in /home/asabil/Desktop/idontexist/.git/
fatal: protocol error: expected sha/ref, got ‘
*********’

No matching repositories found.

*********’

Now is probably the time to stop this non-constructive “my DVCS is better than yours”, and focus on writing code and fixing bugs.


19 November 2008

19 Nov 2008

WOW ... Four fucking years without blogging on my Advogato page. I needed time to put my head and my body in the right place. Four years of doubt, sadness and happiness as well. So, a few days ago, I decided to blog again.

It's all for the moment :)

22 July 2008

Looking for a job

In September I finish my studies in computer science, so I have started to look for a job. I have really enjoyed my current job at Collabora maintaining Empathy; I learned lots of things about the Free Software world, and I would like to keep working on free software related projects if possible. My CV is available online here.

Do you know of any company around free software and GNOME looking for new employees? You can contact me by email at xclaesse@gmail.com

22 April 2008

Enterprise Social Search slideshow

Enterprise Social Search is a way to search, manage, and share information within a company. Who can help you find relevant information, and nothing but relevant information? Your colleagues, of course.

Today we are launching at Whatever (the company I work for) a marketing campaign for our upcoming product: Knowledge Plaza. Exciting times ahead!

28 January 2008

Ubuntu stable updates

There were some blog entries this week about GNOME stable updates in Ubuntu. There is no reason new bug-fix versions could not be uploaded to stable, apart from the fact that the SRU rules require carefully checking all the changes, and doing this job on all the GNOME tarballs is quite some work; the Ubuntu desktop team is quite small and already overworked.

There is a list of packages which have relaxed rules, though. We have discussed adding GNOME to those, since the stable series usually has fixes worth having and not too many unstable changes (though the stable SVN code usually doesn't get a lot of testing), and decided that the stable updates which look reasonable should be uploaded to hardy-updates.

There were also some concerns about gnome-games; 2.20.3 has been uploaded to gutsy-proposed today, which should reduce the number of bugs sent to the GNOME bugzilla. The new dependencies on ggz have also been reviewed, and 2.21 should be built soon in hardy.

14 November 2007

GNOME and Ubuntu

The FOSSCamp and UDS week was nice and a good occasion to talk to upstream and to people from other distributions. We had desktop discussions about the new technologies landing in GNOME this cycle (the next Ubuntu will be an LTS, so we need a balance between new features and stability), the desktop changes we want to make, and how Ubuntu contributes to GNOME.

Some random notes about the Ubuntu upstream contributions:

  • Vincent asked again for an easy way to browse the Ubuntu patches and Scott picked up the task, the result is available there
  • The new Canonical Desktop Team will focus on making the user experience better, most of the changes will likely be upstream material and discussed there, etc
  • Canonical has open Ubuntu Desktop Infrastructure Developer and Ubuntu Conceptual Interface Designer positions; if you want to do desktop work for a cool open source company, you might be interested in those ;-)

GNOME updates in gutsy and hardy

  • Selected GNOME 2.20.1 changes have been uploaded to gutsy-updates
  • The GNOME 2.21.2 packaging has started in hardy; some updates and a lot of Debian merges are still on the TODO, though
  • We have decided to use tags in patches to indicate the corresponding Ubuntu and upstream bugs, so it's easier to get the context of a change; technical details still need to be discussed, though

Update: Scott pointed out that you can use http://patches.ubuntu.com/n/nautilus/extracted to access the current nautilus version

03 November 2007

git commit / darcs record

I’ve been working with git lately, but I have also missed the darcs user interface. I honestly think the darcs user interface is the best I’ve ever seen; it’s such a joy to record/push/pull (when darcs doesn’t eat your CPU) :)

I looked at git add --interactive because it had hunk-based commits, a prerequisite for darcs record-style commits, but it has a terrible user interface, so I just copied the concept: run a git diff, filter the hunks, and then pipe the filtered diff through git apply --cached.

It supports binary diffs, and file additions and removals. It also asks for new files to be added, even if this is not exactly how darcs behaves, but I always forget to add new files, so I added it. It will probably break on some extreme corner cases I haven’t been confronted with yet, but I gladly accept any patches :)
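The diff → filter → git apply --cached pipeline can be sketched roughly as follows. This is a minimal illustration of the concept, not the actual script; the names split_hunks, filter_patch and record are hypothetical, and the real script handles binary diffs, file additions and removals, which this sketch skips:

```python
import subprocess

def split_hunks(file_diff):
    """Split one file's unified diff into its header and a list of hunks."""
    header, hunks, current = [], [], None
    for line in file_diff.splitlines(keepends=True):
        if line.startswith("@@"):
            current = [line]          # a new "@@ -a,b +c,d @@" hunk starts
            hunks.append(current)
        elif current is None:
            header.append(line)       # "diff --git", "---", "+++", index lines
        else:
            current.append(line)
    return "".join(header), ["".join(h) for h in hunks]

def filter_patch(file_diff, wanted):
    """Rebuild a patch containing only the hunks whose indices were accepted."""
    header, hunks = split_hunks(file_diff)
    kept = [h for i, h in enumerate(hunks) if i in wanted]
    return header + "".join(kept) if kept else ""

def record(message, ask=lambda hunk: input(hunk + "Record? [Yn] ") != "n"):
    """Ask about each hunk of `git diff`, stage the accepted ones, commit."""
    diff = subprocess.run(["git", "diff"], capture_output=True,
                          text=True, check=True).stdout
    for part in diff.split("diff --git")[1:]:
        file_diff = "diff --git" + part
        _, hunks = split_hunks(file_diff)
        patch = filter_patch(file_diff,
                             {i for i, h in enumerate(hunks) if ask(h)})
        if patch:
            # Stage only the selected hunks, without touching the work tree
            subprocess.run(["git", "apply", "--cached"],
                           input=patch, text=True, check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
```

The key trick is that `git apply --cached` updates the index without modifying the working tree, so the unselected hunks simply stay as uncommitted local changes.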

Here’s a sample session of git-darcs-record script:

$ git-darcs-record
Add file:  newfile.txt
Shall I add this file? (1/1) [Ynda] : y

Binary file changed: document.pdf

Shall I record this change? (1/7) [Ynda] : y

foobar.txt
@@ -1,3 +1,5 @@
 line1
 line2
+line3
 line4
+line5

Shall I record this change? (2/7) [Ynda] : y

git-darcs-record
@@ -1,17 +1,5 @@
 #!/usr/bin/env python

-# git-darcs-record, emulate "darcs record" interface on top of a git repository
-#
-# Usage:
-# git-darcs-record first asks for any new file (previously
-#    untracked) to be added to the index.
-# git-darcs-record then asks for each hunk to be recorded in
-#    the next commit. File deletion and binary blobs are supported
-# git-darcs-record finally asks for a small commit message and
-#    executes the 'git commit' command with the newly created
-#    changeset in the index
-
-
 # Copyright (C) 2007 Raphaël Slinckx
 #
 # This program is free software; you can redistribute it and/or

Shall I record this change? (3/7) [Ynda] : y

git-darcs-record
@@ -28,6 +16,19 @@
 # along with this program; if not, write to the Free Software
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.

+# git-darcs-record, emulate "darcs record" interface on top of a git repository
+#
+# Usage:
+# git-darcs-record first asks for any new file (previously
+#    untracked) to be added to the index.
+# git-darcs-record then asks for each hunk to be recorded in
+#    the next commit. File deletion and binary blobs are supported
+# git-darcs-record finally asks for a small commit message and
+#    executes the 'git commit' command with the newly created
+#    changeset in the index
+
+
+
 import re, pprint, sys, os

 BINARY = re.compile("GIT binary patch")

Shall I record this change? (4/7) [Ynda] : n

git-darcs-record
@@ -151,16 +152,6 @@ def read_answer(question, allowed_responses=["Y", "n", "d", "a"]):
        return resp

-def setup_git_dir():
-       global GIT_DIR
-       GIT_DIR = os.getcwd()
-       while not os.path.exists(os.path.join(GIT_DIR, ".git")):
-               GIT_DIR = os.path.dirname(GIT_DIR)
-               if GIT_DIR == "/":
-                       return False
-       os.chdir(GIT_DIR)
-       return True
-
 def git_get_untracked_files():

Shall I record this change? (5/7) [Ynda] : y

# On branch master
# Changes to be committed:
#   (use "git reset HEAD file..." to unstage)
#
#       modified:   document.pdf
#       modified:   foobar.txt
#       modified:   git-darcs-record
#       new file:   newfile.txt
#
# Changed but not updated:
#   (use "git add file file..." to update what will be committed)
#
#       modified:   git-darcs-record
#
What is the patch name? Some cute patch name
Created commit a08f34e: Some cute patch name
 4 files changed, 3 insertions(+), 29 deletions(-)
 create mode 100644 newfile.txt

Get the script here: git-darcs-record script and put it somewhere in your $PATH. Any comments or improvements are welcome!

22 January 2007

A new laptop, without Windows!

I had been thinking about it for a long time, and now it's done: I bought myself a brand new laptop.

I bought it from the French site LDLC.com, and asked whether it was possible to buy the computers in their catalogue without software (mainly without Windows). So I sent them an email and, to my great surprise, they replied that it was entirely possible: you just place the order and then send an email asking them to remove the software from it. So I ordered my laptop and they refunded me €20 for the software. It's not much compared to the price of a laptop, but symbolically it's already something.

Still, I have some questions. Why isn't this offer mentioned anywhere on LDLC's website? Looking under my brand new laptop, I noticed something odd: the remains of a sticker that had been peeled off, exactly where the Windows XP activation key is usually stuck. The refund of a round €20 by LDLC also seems strange to me, since LDLC is only a reseller, not a manufacturer, so they buy the computers with Windows already installed. All this leads me to believe that it's LDLC who absorbs the €20, and I wonder why. To please free software minded customers? To avoid lawsuits over bundled sales? To get the licences their customers didn't want refunded in turn by the manufacturer/Microsoft, and maybe make more than €20 if the OEM licences are worth more than that? This will probably always remain a mystery.

So I installed Ubuntu, which runs rather well. I was even very impressed by network-manager, which connects me automatically to wired or wireless networks depending on availability, and even sets up a zeroconf network if it can't find a DHCP server. It's very handy for transferring data between two computers: just plug an Ethernet cable between the two (it also works over wifi, but I haven't tested that yet) and the whole network is configured automatically, without touching anything. Really magical! Windows can go hide; Ubuntu is far easier to use!

20 December 2006

Documenting bugs

I hate having to write about bugs in the documentation. It feels like waving a big flag that says ‘Ok, we suck a bit’.

Today, it’s the way fonts are installed — or rather, the way they aren’t. The Fonts folder doesn’t show the new font, and applications that are already running don’t see it.

So I’ve fixed the bug that was filed against the documentation. Now it’s up to someone else to fix the bugs in Gnome.

05 December 2006

Choice and flexibility: bad for docs

Eye of Gnome comes with some nifty features like support for EXIF data in jpegs. But this depends on a library that isn’t a part of Gnome.

So what do I write in the user manual for EOG?

‘You can see EXIF data for an image, but you need to check the innards of your system first.’
‘You can maybe see EXIF data. I don’t know. Ask your distro.’
‘If you can’t see EXIF data, install the libexif library. I’m sorry, I can’t tell you how you can do that as I don’t know what sort of system you’re running Gnome on.’

The way GNU/Linux systems are put together is perhaps great for people who want unlimited ability to customize and choose. But it makes it very hard to write good documentation. In this sort of scenario, I would say it makes it impossible, and we’re left with a user manual that looks bad.

I’ve added this to the list of use cases for Project Mallard, but I don’t think it’ll be an easy one to solve.

Sources

Planète GNOME-FR

Planète GNOME-FR is an overview of the life, work and, more generally, the world of the members of the GNOME-FR community.

Some posts are written in English, because we collaborate with people from all over the world.

Last updated:
03 August 2015 at 21:30 UTC
All times are UTC.

Colophon

Planète GNOME-FR is powered by the Planet aggregator, cron, Python, and Red Hat (which hosts this server).

The site design is based on that of the GNOME and Planet GNOME websites.

Planète GNOME-FR is maintained by Frédéric Péters and Luis Menina. If you would like to add your blog to this planet, just open a bug. Feel free to contact us by email for any other question.