Happy Birthday, Debian!


[Image: Debian 25 years – Thank you!]

Image taken from https://bits.debian.org/2018/08/debian-is-25.html, published under the MIT License (see: https://www.debian.org/license)

On August 16th, 1993, the 20-year-old Ian Murdock posted a message on comp.os.linux.development (that's a Usenet newsgroup – Usenet was the way to connect to and interact on the Internet before there was the World Wide Web you are using now (that's why you have to put www in front of most websites), and newsgroups were the forums/message boards of Usenet), announcing his new Linux distribution called Debian (named after his then girlfriend and later wife Debra and himself, Ian).

Before distributions, Linux consisted only of the kernel itself, distributed as source code – so you had to download it onto a host system, compile it, and then set up your machine by hand, from that host system, with all the components needed to run it as a Linux system. This was really complicated, and so distributions arose, allowing the user to simply install the system from a bootable device:

Debian will contain a installation procedure that doesn’t need to be babysat; simply install the basedisk, copy the distribution disks to the harddrive, answer some question about what packages you want or don’t want installed, and let the machine install the release while you do more interesting things

For the elder generations, 25 years might not seem like much. But looking back, this actually makes Debian one of the oldest Linux distributions and probably the oldest one that is still actively used; in comparison, the Linux kernel itself is just two years older! Praised for its stability, it is the choice of many system administrators.

Debian's package management system is – as far as I could research it – the oldest and therefore the first one, and it is still actively used and ported to other systems even today: .deb packages were ported to UNIX System V via OpenSolaris and to BSD via the UNIX-like FreeBSD (from which macOS is also derived) – and speaking of Apple, Fink brought Debian's package management system to macOS and Cydia brought it to iOS. According to Murdock himself, package management is "the single biggest advancement Linux has brought to the industry". He later worked for Docker, which is – if you will – in a way an even bigger packaging system, in the sense that it packages not only the software itself but also the entire running system.

It is also the basis of countless derivatives, of which the most famous or important are probably Knoppix, Grml, Kali Linux, Raspbian and of course the most important of all: Ubuntu, which itself is the basis of countless additional derivatives (Wikipedia has a great graph showing all Linux distributions and how they are related to each other – Debian accounts for more than a third of that space). And Distrowatch's Top 10 Distributions list (which actually contains 11 distros) lists 3 Debian-based distros, including Debian itself.

So it's more than fair to say that Debian has had a great impact on the Linux world and helped shape it into what it is today. I therefore wish the Debian system and its development team a very happy birthday, and hope that it will see the next 25 years with the same prosperity!

And if you are puzzled as to why I am talking about Debian – well, I myself used that distribution for a year in my early Linux days. I started out with S.u.S.E. Linux 6.3 around 1998-2000. I got fed up with it after they introduced YaST2 for SuSE 8.0 and switched to Debian 3.0 aka Woody in 2004. However, we weren't meant for each other. It was incredibly outdated, which made me really laugh at this passage of the newsgroup post:

Debian will contain the most up-to-date of everything.

While other distributions of that time were enjoying kernel version 2.6, Debian came with 2.2 in the stable branch. It was also missing a lot of software – which, at a time when Linux software was still a rare thing, meant a lot, especially in the everyday personal desktop PC context. After mixing stable, testing and unstable packages and dreading every update, because it meant another weekend spent trying to repair the system, I switched once more – to Gentoo.

However, I have always been fond of Debian, their goals and their Manifesto. Debian has always been my go-to distribution for running servers. And lately – well, in April I switched from Gentoo to Ubuntu as my main driver, mainly for two reasons:

  1. Over the last months and years Gentoo has become more and more unstable. While in my early years I could simply run emerge -avuND world && shutdown, go to bed and have an up-to-date system the next morning, nowadays this fails most of the time and I need to spend hours and days fixing it. That was my main reason for leaving Debian a long time ago – now it's the reason for leaving a distribution that has been my main driver for 14 years. That's hard, yes. But also somewhat exciting.
  2. I need certain software that unfortunately is developed just for Ubuntu and does not work under other Linux distributions – either at all, or only with limitations. I am really frustrated about that, because developing software for only one particular distribution is not at all the Linux way and shouldn't be rewarded – however, if you need the software… This list of software includes (without being complete):
      • ROS: Supported platforms are Ubuntu and sometimes Debian. On Gentoo installation was tricky but possible; however, some packages weren't ported. The maintainer is fast to react, but as I needed one package really fast, the switch was inevitable.
      • Rock: Like ROS but with real-time support from Orocos, developed by the DFKI. Like ROS it's Ubuntu only, even if the website says something different. I tried the installation under Gentoo, Arch and macOS and failed miserably. During my time at the DFKI I did not meet anyone using anything other than Ubuntu, and even on Ubuntu the installation was buggy as hell.
      • Unreal Engine: While under Linux you always have to build it from source, and it is not Ubuntu-only (it provides installation pointers for CentOS, Fedora, Arch and Mint), UE4 runs more smoothly on Ubuntu and is easier to build than it was under Gentoo.

While Ubuntu wouldn't have been my first choice, I have to say that I am pretty happy with it. It's different for sure and I still have a number of smaller issues, but we get along. In the long run, however, I might also be looking into Debian once again.

PS: On a rather sad note – in 2015, for publicly unknown reasons, the founder and inventor of Debian (an American who was actually born in Konstanz, Germany!) killed himself under rather mysterious circumstances, after having something that could be called a nervous breakdown, at the age of 42, leaving behind three children. May he rest in peace. Your legacy will live on!

Automount a specific USB drive


Actually this is a straightforward thing; however, since it has been a while, I had to google it myself, and was astonished by how many non-working solutions I found, besides solutions that simply mount every USB device according to its label. So here is the straightforward solution to mount a specific USB device to a specific location in your Unix hierarchical file system, using udev. It assumes that you have a running version of udev and the udev tools. If not, please consult the documentation of the Linux distribution of your choice. This might include recompiling your kernel, as udev will need the following settings:

General setup --->
[*] Configure standard kernel features (expert users) --->
[ ] Enable deprecated sysfs features to support old userspace tools
[*] Enable signalfd() system call
Enable the block layer --->
[*] Block layer SG support v4
Networking support --->
Networking options --->
<*> Unix domain sockets
Device Drivers --->
Generic Driver Options --->
() path to uevent helper
[*] Maintain a devtmpfs filesystem to mount at /dev
< > ATA/ATAPI/MFM/RLL support (DEPRECATED) --->
File systems --->
[*] Inotify support for userspace
Pseudo filesystems --->
[*] /proc file system support
[*] sysfs file system support

As for Gentoo Linux, the other things you will want to do are to add "udev" to your USE flags (by adding it to your /etc/portage/make.conf), get udev installed (calling emerge -avuD sys-fs/udev), and add udev to your sysinit runlevel (rc-update add udev sysinit). A short recap of those steps follows below.
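
In other words, the Gentoo side of the setup boils down to something like this (just a sketch of the commands mentioned above – merge the USE entry with whatever flags you already have):

# /etc/portage/make.conf – append udev to your existing USE flags
USE="... udev"

# install udev and add it to the sysinit runlevel
emerge -avuD sys-fs/udev
rc-update add udev sysinit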

Now to the fun part. First of all you need to get some information about the device you are interested in. There are a number of ways, like using udevadm monitor, etc. Most of them, however, are too messy for me. If you have no idea about your device and still need to figure things out, blkid -o list will show you a nice table of all devices, their device file, file system type, label, mount point and UUID – everything you need. In my case, I know I have a stick labelled "Public", formatted on OS X with the exFAT file system, and I have now inserted it into a dual-boot Linux machine with a number of partitions:

ancalagon ~ # blkid -o list
device                            fs_type      label         mount point                           UUID
---------------------------------------------------------------------------------------------------------------------------------------
/dev/sdb1                         vfat                       /boot                                 58FB-332D
/dev/sdb2                         swap                       [SWAP]                                73db158f-0e19-4d17-8c88-a8b0c1dff1f3
/dev/sdb3                         ext4                       /home                                 355af6d8-6f03-4a98-9a45-edafc3ccedde
/dev/sdb4                         ext4                       /                                     63be67f3-5c7c-48ea-a8b3-58dff9da1737
/dev/sda1                         ntfs         Wiederherstellung (not mounted)                     562065062064EF05
/dev/sda2                         vfat                       (not mounted)                         6265-B138
/dev/sda4                         ntfs                       (not mounted)                         6C58731C5872E46C
/dev/sdc1                         exfat        Private       /media/private                        56B6-CE90
/dev/sdd1                         exfat        Public        (not mounted)                         56BE-6477
/dev/sda3                                                    (not mounted)

If you want more information, you can get it via the device file with:

ancalagon ~ # udevadm info /dev/sdd1

I want the stick to be mounted at /media/public, so I need to create a rule file; on my Gentoo it lies under /etc/udev/rules.d/90-local-usb.rules. Actually the name is totally arbitrary, except for the number at the beginning and the extension, which always has to be .rules. The number should be something high, because we want udev to run all other rules first (e.g. the ones that assign the device to a device file) before running ours. 90 is a good value for that.

So in my case, this is what I added:

SUBSYSTEMS=="usb", ENV{ID_FS_UUID}=="56BE-6477", ACTION=="add", RUN+="/usr/bin/logger --tag udev Mounting public", RUN+="/bin/mount -o umask=0077,nosuid,uid=1000,gid=1001 '%E{DEVNAME}' /media/public"

We need to provide the system or subsystem, which for a USB device is usb. The UUID comes from blkid and identifies the device. The action determines when to run the commands – in our case, when a new USB device is added and it has the UUID we want. And finally the mount command. I've added another command so that there is a log entry, but that is not strictly needed. And as I want the stick to be accessible as a normal user, I added uid and gid accordingly. If you need to find out your user and group id, just run:

ancalagon ~ # id -u 
ancalagon ~ # id -g 
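
If you also want the mount point released when the stick is pulled, a matching remove rule can take care of that. This is just a sketch that I have not tested as thoroughly as the rule above – whether ID_FS_UUID is still populated on the remove event may depend on your udev version, so check with udevadm monitor --property if it doesn't fire:

SUBSYSTEMS=="usb", ENV{ID_FS_UUID}=="56BE-6477", ACTION=="remove", RUN+="/usr/bin/logger --tag udev Unmounting public", RUN+="/bin/umount -l /media/public"

The -l makes it a lazy unmount, since the device node has already disappeared by the time the rule runs.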

And that’s it. If you want to see if the rule triggers, just run

ancalagon ~ # tail -f /var/log/everything/current

It should output:

[udev] Mounting public

somewhere. And you can simulate the USB event with udevadm, by triggering the rule you just wrote (although this is rather interesting for more general rules that should fit more than just one device). This is how it's done:

ancalagon ~ # udevadm trigger --action="add" --property-match=ID_FS_UUID="56BE-6477"

El Capitan es una mierda


… or to put it in English: Mac OS X 10.11 – called "El Capitan" – sucks. Now, while you won't see much of the changes in your everyday life if you are just an email, internet and office application user – once you drill down a bit, there is a huge bunch of problems.

First, and that is one that deeply annoys me: Photos. Before Yosemite there was iPhoto, which was pretty neat already, with all the functions that a hobby photographer would enjoy – face recognition, tags, metadata information, a map pinpointing every photo you took (you could actually click on it, then click on a location and see all photos taken there). It was great.
If you wanted more: Aperture was the power app – basically like iPhoto, but allowing for more and specialized photo editing, support for multiple libraries, etc.

All that was gone with the last updates. Instead of iPhoto and Aperture, Apple decided to introduce Photos. It is basically the same app that runs on the iPhone, and beyond a bare minimum of basic things it does not allow you to do anything. All the cool features from iPhoto and Aperture are missing. And while Photos might be great on a mobile device – on a laptop or stationary device this sucks. No album support, just zooming in and out of a stream of photos, hardly any editing support, no batch support (I mean, wtf? You seriously believe that Aperture users will dig this?). But it gets worse. While first applauding the simplicity and pointing out that now you only need to know one app that works the same on all devices – with El Capitan they introduce new features that distinguish the OS X version from the mobile version again. And guess what: a few selected features that we knew from iPhoto and Aperture are now being sold as new, innovative ideas. NOW you can filter your photos by location – seriously? Who the hell are you trying to kid, Apple?!

So the new features in El Capitan are – hold your hats – the ability to add and edit location information, to edit metadata in batches, and a re-introduced sidebar with some "new" features, including – finally – the ability to have third-party editing plug-ins if you are not happy with the limited filters Apple provides.

But actually I digress. I didn't want to talk about Photos, and if you are interested in that, there are tons of places on the net where professional photographers who were content with iPhoto and Aperture express their feelings towards the new Photos.

Although I handle a lot of photos, that is just a hobby, and if there is some serious editing needed there are alternatives like Photoshop.

Continue reading

[Ruby] Equal is not always equal


The reason for this blog article is a question that dealt with the different ways of checking equality in Ruby, or more specifically the so-called "threequals" operator method. You might have come across it: it is the triple equals sign ===, a very Ruby-specific thing. Even though everybody calls it an operator, I think for the understanding it is crucial to be specific here – "threequals" is a method (of an object), not an operator. I stress this because it is not true for other object-oriented languages, such as Java, in which all infix operators (such as ==) are simply part of the language, i.e. their definition exists outside the object world.

After the aforementioned question I struggled myself to find any good explanation of the different methods Ruby provides for comparing things, which is why I decided to write down the things I told him, and additionally to place them in the context of all four methods Ruby offers. Yes, there are four ways of comparison, which is twice as many as languages such as Java or Smalltalk offer (and think of C++, which knows just the == comparison). To fully appreciate the differences, let me start off with the first and typical stumbling block every novice programmer encounters: value equality vs. reference equality (skip this if you are familiar with the concept); a small taster follows below.
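
To give a first impression, here is a minimal sketch of my own (not part of the original question) showing the four methods – equal? for reference equality, == for value equality, eql? for value equality including the type, and === for case equality – on some everyday objects:

a = "linux"
b = "linux"
a == b          # => true,  same value
a.eql?(b)       # => true,  same value and same type
a.equal?(b)     # => false, two distinct String objects

1 == 1.0        # => true,  == coerces between Integer and Float
1.eql?(1.0)     # => false, eql? also takes the type into account
1.equal?(1)     # => true,  small integers are always the same object

Integer === 1   # => true,  === on a class asks "is it an instance of me?"
/nix/ === a     # => true,  === on a regexp asks "does it match?"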

Continue reading

FATAL: modpost: GPL-incompatible module nvidia.ko uses GPL-only symbol ‘flush_workqueue’


I got the above error message after reinstalling my Gentoo Linux on a new hard drive. Untypically for a Gentoo user, I still fetch my Linux kernel from the official website and compile it myself,

  1. because you get a kernel unmeddled with by any distributor
  2. because you have to compile it manually anyway (genkernel sucks)
  3. because back in the old days the kernel sources were not in Portage, and therefore I am used to fetching them by hand

So my last Kernel was still a 4.0, while the new one now is a 4.2. I love Linux and distributions like Gentoo because they are text-based – I just copy and paste my configs and everything runs smoothly. Well – in theory.

The reality, of course, often looks different, and this error is one example. It occurs, for instance, when you try to get the nVidia drivers running on Gentoo with the current kernel 4.2 – and it is independent of the distribution (as you can see here, for instance).

So, when you look into the error and what's causing it, it seems that in version 4.2 the kernel developers assigned a different license marker to the flush_workqueue function, i.e. its usage is now allowed for GPL-licensed modules only. And as the nVidia drivers are closed source…

Now there is an easy way to circumvent this problem (see below), which would allow you to install the nVidia drivers anyway. However, this would be a deliberate breach of the license and would be at least rude to the developers.

Yet it seems like even the developers are not sure whether that was a good idea – it rather looks like a refactoring error that will probably be reverted any time now. There is at least a message patching it for the unstable 4.3, and it was suggested as a patch for 4.2-r5, but even in 4.2-rc8 this patch has not been included yet. So it has now been at least two months in which neither nVidia nor the Linux kernel have reacted to the problem; so either you downgrade the kernel (which is a bunch of work), or you simply patch it yourself 🙂 Even though one might argue that it is a license breach, I do not believe that it is the intention of Linux to render your machine useless (or give you extra work by downgrading), and it probably will be reverted soon – and if not, nVidia will react with a new driver version. Until then this workaround will allow you to use your nVidia graphics card with the Linux kernel 4.2.

You just need to open the file /usr/src/linux/kernel/workqueue.c in your favorite editor and search for the line

EXPORT_SYMBOL_GPL(flush_workqueue);

(in my kernel sources it's line 2617).

The extra _GPL part is responsible for the build crashing when the function is used in code that is not GPL-licensed. So just delete it, so that the line looks like this:

EXPORT_SYMBOL(flush_workqueue);
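
If you prefer not to open an editor at all, the same one-line change can be made with sed – a quick sketch, assuming /usr/src/linux points at your current kernel sources (double-check the line afterwards):

/usr/src/linux $ sed -i 's/EXPORT_SYMBOL_GPL(flush_workqueue)/EXPORT_SYMBOL(flush_workqueue)/' kernel/workqueue.c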

Now just recompile your kernel. If you haven't changed anything else, it should suffice to call:

/usr/src/linux $ make clean && make && make install

and reboot your system. If, however, you have problems with modules afterwards (e.g. because you activated hashes for your modules so that they don't work with other kernel builds), then do a

/usr/src/linux $ make modules_install

And there you go. Now your nVidia drivers should install and run smoothly.

On a side note:

Continue reading

Mercurial-style aliases in git – and far more!


A cool thing that I really liked about Mercurial and that I miss in git is the possibility of using shortcuts. Mercurial has them hard-wired. For instance, instead of typing hg status you can also type hg st, or if you want to commit something: hg ci -m "Commit Message", which is far handier than using hg commit and then waiting for the editor to start up and save the commit message file. There is also su for summary, and co for checkout. And this really saves you a lot of typing time, as these are the common tasks you use all the time.

Git doesn't come with that possibility out of the box, and as long as I wasn't using it more often than Mercurial, I didn't bother to figure out how I could make git use some shortcuts. That is of course possible, as git is highly configurable. Now I have looked it up, and I found out that there's a highly sophisticated way of doing so.

Just open up your ~/.gitconfig. In it you will already find something like this. You set this up when you first used git – if you haven't done that yet, you'll need to do it, otherwise git won't work:

[user]
   name = Your Name
   email = youraddy@yourprovider.tld

What we’ll add is a new section, starting with [alias]

Now for each alias we want to set up, we write "alias = command". So this is basically exactly the way you would define your custom aliases in Mercurial, if you ever did. If you haven't, here is an example: let's create a shortcut for status:

[alias]
  st = status
[user]
  name = Your Name
  email = youraddy@yourprovider.tld

We can even add flags, if we want. So if status is too chatty for you, try using the short flag:

[alias]
  st = status -s

Now when calling git st you get something like:

D README.md
A cli_ruby.bibtex
M notes.md
?? code/01_have_a_purpose/todo/

with A being an added file, D a deleted file, M a modified file and ?? untracked files. If at any point you prefer the chatty version after all – telling you how many commits your local HEAD is away from origin, and how to reverse changes – just use git status again.

Of course, the real power comes when using highly sophisticated commands that you cannot possibly type out every time you want to use them. For instance, here's a shortcut that not only sets the graph and abbrev-commit flags, but also sets the date to relative and formats the message in a short and neat way, presented in custom colors:

[alias]
  st = status -s
  lg = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr)%Creset %Cblue<%an>%Creset' --abbrev-commit --date=relative

The above I found in Shawn Biddle's git config. He's really a great guy; you should visit his site and YouTube channel, as well as his dotfiles on GitHub – there's a lot to learn. One great additional feature of aliases is that you can even execute shell scripts or use other aliases. One example:

You work with a team on a project and you only want to display your own commits in the log. One can achieve this by hardcoding the name into the alias:

  my = log --author='Your Name'

That's handy, but if you want to share your config with friends, they will have to alter the config. And even you might have different user names and want to keep it flexible. So instead you can let git read the name from your config and execute that lookup inside the alias – note the leading !, which turns the alias into a shell command so that the command substitution actually gets expanded:

  my = !git log --author="$(git config user.name)"

So that’s already neat, but let’s go further: Say you have different aliases that all need to use the current username, for instance you might want to get all the commits done by you today. Now you know a good programmer is a lazy programmer, so rather than retype “config user.name” so many times, let’s use an alias for it:

  me = config user.name

And now we can do:

  me = config user.name
  my = !git log --author="`git me`"
  today = !git log --since=midnight --author="`git me`"
  yesterday = !git log --since=day.before.yesterday.midnight --until=midnight --author="`git me`"

Pretty neat, huh?

Heartbleed


This topic is so important that I will cover it in both German and English. Please scroll down for the English version. And there will be no tl;dr! If you are too lazy to read up on something this important, but keep on using the Internet, you probably deserve to be hacked! Please pay special attention to what to do and what not to do if you aren't interested in my media criticism part!

There is quite a bit of uncertainty surrounding the vulnerability called Heartbleed. What is it even supposed to be? I don't use this software – OpenSSL – at all! I have to change all my passwords RIGHT NOW! And so on.

The widely read SPON in particular once again demonstrates what our media do best: spreading panic! Instead of informing, it is all about conspiracy theories, finger-pointing and idiotic calls to action. SPON is just one example here – even though, in my opinion, they once again take the cake.

The Problem

Why does Heartbleed potentially affect everyone? OpenSSL is a so-called framework, i.e. software designed to be embedded by other software. This is done because, for one thing, it is nonsense for everyone to reinvent the wheel. With security software in particular it is also hardly feasible, since mathematically sound procedures are very complicated and few companies have the know-how required for such a thing. And so, by some estimates, two thirds of all web services use OpenSSL for the secure transmission of data. That is why virtually everyone really is affected.

Continue reading

SERIOUS: Update your iDevice and don’t use Safari!


In case you haven't heard: there is a SERIOUS bug in iOS/OS X which affects SSL/TLS and basically renders it ineffective. SSL/TLS is used to encrypt and protect data sent via secure connections, e.g. when using HTTPS to shop with Amazon, for your online banking, or for sending your passwords encrypted to the networks, e.g. mail passwords, etc. You are especially vulnerable if you're outside your secured network (e.g. office or home network), i.e. in a shared network such as wireless hotspots, mobile networks, etc.

Both iOS and OS X are affected. For iOS, Apple has already released patches, and they even include the devices that are officially not supported anymore, i.e. the 3GS and iPod Touch. Those devices should be UPDATED to version 6.1.6; all newer devices should be UPDATED to version 7.0.6.

Unfortunately, for OS X the patch is still being developed, so here you'll want to check your software update status regularly, and until then DO NOT use the Safari browser. You'll be fine using Firefox, Opera or Chrome, which ship their own implementation of SSL/TLS.

Other applications affected are:

  • Calendar
  • Facetime
  • Keynote
  • Twitter
  • Mail
  • iBooks
  • Software Update

Or to put it briefly: all Apple software (and third-party software using the Apple Security Framework) that provides ways to connect to servers. It should be relatively safe to use these applications at home, but UNDER NO CIRCUMSTANCES should you use them in wireless networks that other people can use as well, i.e. anywhere outside your secured home network.

For more information, also read the article Why Apple's Huge Security Flaw is so Scary!

Java just won’t manage Non-Primitive Numbers


I just stumbled over something really funny; something that would fit into the Watman lightning talk by Gary Bernhardt. As you may know, Java is yet another language that copied from a really great programming language by the visionary Alan Kay, named Smalltalk. In Smalltalk everything is an object (yes, really, really everything! Even classes are objects, described by metaclasses – even creating a subclass is done by sending the superclass a message, to which it replies with a subclass!) and all programming is done by objects sending and receiving messages, and answering them. This is why in OOP you don't have functions but methods. A function can be called – it can be applied to values. So you actually define what to do. In OOP, an object decides how to properly react to a message. It does so by looking up in its methods how to react to the received message. But how it will react is totally up to the object; it's even possible that many different objects reply to the same message using different methods – a function, on the other hand, is unique. But the key idea is that while in imperative programming you see and apply the function, in object-oriented programming this is a black box to you (unless you programmed the method).

But I digress. Coming back to Java: when it was developed, it wanted to implement OOP, but on the other hand it also wanted to keep up with the speed and popularity of C. So not only did the syntax change to be more C-like, but also a lot of things that work purely object-oriented in Smalltalk were implemented the way they are in C. This also applies to numbers: they are primitives – values that lie in RAM, values to which functions are applied. Now, this may sound totally normal to us – we don't think of a number as an object which we ask to do something, and then see whether it does it or not. But with Smalltalk, that's exactly how it's done:

1 class
=> SmallInteger

1 respondsTo: #+
=> true

Other pure object-oriented languages, such as Objective-C or Ruby (example below) behave in a similar way:

1.class
=> Fixnum

1.respond_to?(:+)
=> true

So how does it work? + is a method like any other method – with the exception that it is written in a special way, so that it can be used in a more human-readable, infix notation. #+ and :+ are symbols in Smalltalk/Ruby, which are used to identify the method. Say you have a method called println(); the symbol would be :println in Ruby or #println in Smalltalk. Ruby allows us to send a message to an object not only by naming the method, but also by using the send() method, which every object understands, and where the first argument is the symbol of the method and the following arguments are the arguments accepted by the method. So here it should become obvious that + is actually a method and the second number (another object) an argument:

1.send(:+, 2)
=> 3

The answer to the message is actually a new object. Not so in Java. Primitives are non-object values:

System.out.println(1.getClass());

=> Unresolved compilation problem:
Cannot invoke getClass() on the primitive type int
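
For contrast – a minimal sketch of my own, not from the original post (the class name Boxed is just for illustration) – the boxed wrapper type Integer does behave like an object, which only highlights the split between Java's primitive world and its object world:

public class Boxed {
    public static void main(String[] args) {
        Integer boxed = 1;                     // autoboxing wraps the primitive int in an object
        System.out.println(boxed.getClass());  // prints: class java.lang.Integer
        System.out.println(boxed + 2);         // unboxed again for the arithmetic, prints: 3
    }
}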

Continue reading

researchr on Linux (Addendum to Computer-aided Scientific Workflow)


In my last blog entry, I presented an extemporaneous yet neat solution for an academic workflow, which was unfortunately limited to the Mac. Though I own a MacBook myself, I am also very passionate about Linux and like to find solutions that are platform-independent.

I already suspected that the given setup should be easily portable to Linux; only the open source tools Skim and BibDesk would take some deeper programming, as they depend on the Cocoa library. But one could find alternatives.

So the presented workflow should be more or less reproducible on a Linux system. Get a feel for why this workflow is – in my opinion – ingenious, and then try porting it to Linux. The community of scientists using Linux will most definitely appreciate it.

One way to replace Skim and BibDesk would be to turn to an integrated solution, such as the cross-platform application Mendeley, which uses the Qt framework and is therefore available for Linux, Mac and Windows. It is similar to Papers, but in my opinion Mendeley seems to be much leaner. It also offers some social-network features that other reference management systems lack.

The Ph.D. student Bodong Chen (who incidentally also studies at the University of Toronto) tried it, and seems to have succeeded. On his researchr wiki he gives some pointers on how it's done (and it seems like he also tried to do so on Windows, but, as I suspected, apparently without success).

So, to all you Linux heads out there, here's a solution for you, too. Try it out, make it better, document it, and give me a link if you do 😉