Shared-Desktop-Ontologies (since 2009): An effort to create an open standard for semantic desktop vocabularies.

Nepomuk (since 2006) – Bringing the semantic desktop to KDE. Started as a subproject of a larger European research project, the Nepomuk module has found its place in the core of KDE and aims to gradually refactor the way we handle everyday data. I am the original author, maintainer, and main developer of the Nepomuk libraries, the Nepomuk service system, the Nepomuk virtual filesystem integration, and the semantic desktop extensions.

Soprano (since 2006) – An RDF storage and parsing/serialization framework for Qt4, used by the Nepomuk project for all data storage. I have been the maintainer and core developer of Soprano since my rewrite for version 2.0.

K3b (1998–2010) – As the original author of K3b, I maintained it for 11 years, during which the CD/DVD burning application became the de facto standard for Linux. K3b ships with most major Linux distributions as the default burning application.

FF/M (2005–2006) – An extension of the FF (Fast Forward) action-planning system that allows the use of external modules (plugins) in planning domains. Development started as part of my diploma thesis.

Astras (1999–2002) – The Airport Surface TRAffic Simulator emerged from a university software project and became the first product of the company Atrics. Astras simulates the airport apron and is used as a training tool for airport controllers at Unique Airport in Zurich.

Recent Posts

TPAC 2012 – Who Am I (On the Web)?

Last week I attended my first TPAC ever – in Lyon, France. Coming from the open-source world and events like FOSDEM or the ever brilliant Akademy, I was not sure what to expect. Should I pack a suit? On arrival all my fears were blown away by an incredibly well organized event with a lot of nice people. I felt very welcome as a newbie; there was even a breakfast for the first-timers, with short presentations giving an overview of the W3C's work in general and of the existing working groups. So before getting into any details: I would love this to become a regular thing (not sure it will though, seeing that next year the TPAC will be in China).

My main reason for going to TPAC was identity on the Web, or WebID for short. OpenLink Software is a strong supporter of the WebID identification and authentication system, so it was important to be present for the meeting of the WebID community group.

The meeting, with roughly 15 people, sparked some interesting discussions. The most heatedly debated topic was splitting the WebID protocol into two parts: 1. identification and 2. authentication. The reason for this is not technical but political: the WebID protocol, which uses public keys embedded in RDF profiles and X.509 certificates containing a personal profile URL, has always had trouble being accepted by several working groups and people. So, to lower the barrier for acceptance and to level the playing field, the idea was to split the part which is indisputable (at least in the semantic web world) from the part that people really have a problem with (TLS).

This led to a very simple definition of a WebID, which I will repeat in my own words since it is not yet written in stone (or rather "written in spec"):

A WebID is a dereferenceable URI which denotes an agent (a person, organization, or piece of software). It resolves to an RDF profile document uniquely identifying the agent.

Here "uniquely identify" simply means that the profile relates the WebID to some other identifier. This identifier can be an email address (foaf:mbox), a Twitter account, an OpenID, or, to restore the connection to the WebID protocol, a public key.
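To make this concrete, here is a minimal sketch of what a client sees after dereferencing a WebID: the profile reduced to RDF-style triples, from which the identifying relations can be picked out. The WebID, the triples, and the set of "identifying" predicates are all illustrative assumptions, not anything defined by the spec.

```python
# Hypothetical WebID profile as (subject, predicate, object) triples.
# All URIs and values are made up for illustration.
WEBID = "https://example.org/people/alice#me"

profile = [
    (WEBID, "rdf:type", "foaf:Person"),
    (WEBID, "foaf:name", "Alice"),
    (WEBID, "foaf:mbox", "mailto:alice@example.org"),   # email identifier
    (WEBID, "cert:key", "_:key1"),                      # link back to the WebID protocol
]

# Predicates we treat as "identifying" in the sense of the post (assumed set).
IDENTIFYING = {"foaf:mbox", "foaf:account", "foaf:openid", "cert:key"}

def identifiers(webid, triples):
    """Collect the other identifiers the profile relates the WebID to."""
    return {(p, o) for (s, p, o) in triples
            if s == webid and p in IDENTIFYING}

print(identifiers(WEBID, profile))
```

A real client would of course fetch and parse the RDF document rather than start from an in-memory triple list; the point is only that "uniquely identify" boils down to the presence of such relations.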

The nice thing about this separation of identification and authentication is that a WebID is now compatible with any of the authentication systems out there. It can be used with WebID-Auth (this is what I call the X.509-certificate-plus-public-key-in-agent-profile system formerly known as WebID), but also with OpenID or even with OAuth. Imagine a service provider like Google returning a WebID as part of the OAuth authentication result. In the case of an OpenID, the OpenID itself could be the WebID, or another WebID could be returned after successful authentication. The client could then dereference it to get additional information.
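The core of the WebID-Auth case can be sketched as follows. This is a hedged illustration of the verification step, not the spec: the server takes the WebID from the client's X.509 certificate, dereferences it, and checks whether the certificate's public key appears among the keys published in the profile. RSA keys are reduced to (modulus, exponent) pairs, and all values are placeholders.

```python
# Keys a verifier might have extracted after dereferencing the WebID
# (illustrative placeholder values, not a real profile).
PROFILE_KEYS = {
    "https://example.org/people/alice#me": [
        {"modulus": 0xB1E55ED, "exponent": 65537},
    ],
}

def verify_webid(webid, cert_modulus, cert_exponent):
    """WebID-Auth core check: does the certificate's public key appear
    among the keys listed in the dereferenced profile?"""
    return any(k["modulus"] == cert_modulus and k["exponent"] == cert_exponent
               for k in PROFILE_KEYS.get(webid, []))

print(verify_webid("https://example.org/people/alice#me", 0xB1E55ED, 65537))
print(verify_webid("https://example.org/people/alice#me", 0xDEADBEEF, 65537))
```

The TLS handshake, certificate parsing, and profile fetching are all elided here; what remains is exactly the part the split keeps separate from identification.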

This is especially interesting when it comes to WebACLs: we can now imagine defining WebACLs on WebIDs from any source. Using mutual owl:sameAs relations, these WebIDs can be made to denote the same person, which the authorizing service can then use to build a list of identifiers that match the one used in the ACL rule.
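The matching step an authorizing service might perform can be sketched like this, under the assumptions above: starting from the authenticated WebID, follow mutual owl:sameAs links to collect every equivalent identifier, then intersect that set with the ACL. The sameAs statements and URIs are invented for illustration.

```python
from collections import deque

# Illustrative owl:sameAs statements harvested from the linked profiles.
SAME_AS = [
    ("https://example.org/alice#me", "https://social.example/alice"),
    ("https://social.example/alice", "https://example.org/alice#me"),
    ("https://social.example/alice", "https://openid.example/alice"),
    ("https://openid.example/alice", "https://social.example/alice"),
]

def same_as_closure(webid):
    """All identifiers reachable via *mutual* owl:sameAs links.
    Requiring both directions keeps one profile from unilaterally
    claiming to be someone else."""
    mutual = {(a, b) for (a, b) in SAME_AS if (b, a) in SAME_AS}
    seen, todo = {webid}, deque([webid])
    while todo:
        current = todo.popleft()
        for a, b in mutual:
            if a == current and b not in seen:
                seen.add(b)
                todo.append(b)
    return seen

def acl_allows(acl_webids, authenticated_webid):
    """Grant access if any equivalent identifier appears in the ACL."""
    return bool(same_as_closure(authenticated_webid) & set(acl_webids))

print(acl_allows({"https://openid.example/alice"}, "https://example.org/alice#me"))
```

Insisting on *mutual* sameAs links is the design choice that matters here: it means both profiles have to assert the equivalence before the service treats the identifiers as the same person.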

In any case, this is a definition that should pose no problems for working groups such as Linked Data Platform. Even the OpenID and OAuth communities should see the benefits of identifying people via URIs. In the end the Web is a Web of URIs…
