Saturday, July 12. 2014
LibreSSL on Gentoo
Yesterday the LibreSSL project released the first portable version that works on Linux. LibreSSL is a fork of OpenSSL and was created by the OpenBSD team in the aftermath of the Heartbleed bug.
Yesterday and today I played around with it on Gentoo Linux. I was able to replace my system's OpenSSL completely with LibreSSL and, with few exceptions, was able to successfully rebuild all packages that use OpenSSL. After getting this running on my own system I installed it on a test server. The webpage tlsfun.de runs on that server. The functionality changes are limited; the only thing visible from the outside is the support for the experimental, not yet standardized ChaCha20-Poly1305 cipher suites, which is a nice thing.

A warning ahead: This is experimental, in no way stable or supported, and if you try any of this you do it at your own risk. Please report any bugs you have with my overlay to me or leave a comment, and don't disturb anyone else (from Gentoo or LibreSSL) with them.

If you want to try it, you can get a portage overlay in a subversion repository. You can check it out with this command:
svn co https://svn.hboeck.de/libressl-overlay/
git clone https://github.com/gentoo/libressl.git
(See the update at the end of this post: the overlay is now maintained on GitHub.)

This is what I had to do to get things running:

LibreSSL itself

First of all the Gentoo tree contains a lot of packages that directly depend on openssl, so I couldn't just replace that. The correct solution to handle such issues would be to create a virtual package and change all packages depending directly on openssl to depend on the virtual. This is already discussed in the appropriate Gentoo bug, but this would mean patching hundreds of packages, so I skipped it and worked around it by leaving a fake openssl package in place that itself depends on libressl.

LibreSSL deprecates some APIs from OpenSSL. The first thing that stopped me was that various programs use the functions RAND_egd() and RAND_egd_bytes(). I didn't know until yesterday what egd is. It stands for Entropy Gathering Daemon and is a tool written in Perl meant to replace the functionality of /dev/(u)random on non-Linux systems. The LibreSSL developers consider it insecure, and after having read what it is I have to agree. However, the removal of those functions causes many packages not to build, among them wget, python and ruby. My workaround was to add two dummy functions that just return -1, which is the error code if the Entropy Gathering Daemon is not available, so the API still behaves as expected (a sketch of what these stubs look like is at the end of this post). I also posted the patch upstream, but the LibreSSL devs don't like it. So in the long term it's probably better to fix applications to stop trying to use egd, but for now these dummy functions make it easier for me to build my system.

The second issue popping up was that the libcrypto.so from libressl contains an undefined main() function symbol, which causes linking problems with a couple of applications (subversion, xorg-server, hexchat). According to upstream this undefined symbol is intended and most likely these are bugs in the applications having linking problems. However, for now it was easier for me to patch the symbol out instead of fixing all the apps. As with the egd issue, in the long term fixing the applications is the better solution.

The third issue was that LibreSSL doesn't ship pkg-config (.pc) files, which some apps use to get the correct compilation flags. I grabbed the ones from openssl and adjusted them accordingly.

OpenSSH

This was the most interesting issue of all of them. To understand it you have to understand how both LibreSSL and OpenSSH are developed. They are both from OpenBSD and they use some functions that are only available there.
To allow them to be built on other systems they release portable versions which ship the missing OpenBSD-only functions. One of them is arc4random(). Both LibreSSL and OpenSSH ship their own compatibility version of arc4random(). The one from OpenSSH calls RAND_bytes(), which is a function from OpenSSL. The RAND_bytes() function from LibreSSL however calls arc4random(). Due to the linking order OpenSSH uses its own arc4random(). So what we have here is a nice recursion: arc4random() and RAND_bytes() try to call each other. The result is a segfault. I fixed it by using the LibreSSL arc4random.c file for OpenSSH. I had to copy another function called arc4random_stir() from OpenSSH's arc4random.c and the header file thread_private.h. Surprisingly, this seems to work flawlessly.

Net-SSLeay

This package contains the Perl bindings for openssl. The problem is a check for the openssl version string that expects the name OpenSSL and a version number with three numbers and a letter (like 1.0.1h). LibreSSL prints the version 2.0. I just hardcoded the OpenSSL version number, which is not a real fix, but it works for now.

SpamAssassin

SpamAssassin's code for spamc requires SSLv2 functions to be available. SSLv2 is heavily insecure and should not be used at all, and therefore the LibreSSL devs have removed all SSLv2 function calls. Luckily, Debian had a patch to remove SSLv2 that I could use.

libesmtp / gwenhywfar

Some DES-related functions (DES is the old Data Encryption Standard) in OpenSSL are available in two forms: with uppercase DES_ and with lowercase des_. I can only guess that the des_ variants are for backwards compatibility with some very old versions of OpenSSL. According to the docs the DES_ variants should be used. LibreSSL has removed the des_ variants. For gwenhywfar I wrote a small patch and sent it upstream. For libesmtp all the affected code was in its ntlm support. After reading that NTLM is an ancient, proprietary Microsoft authentication protocol I decided that I don't need it anyway, so I just added --disable-ntlm to the ebuild.

Dovecot

In Dovecot two issues popped up. LibreSSL removed the SSL compression functionality (which is good, because since the CRIME attack we know it's not secure). Dovecot's configure script checks for it, but the check doesn't work: it checks for a function that LibreSSL keeps as a stub. For now I just disabled the check in the configure script. The solution is probably to remove all remaining stub functions; the configure script could probably also be changed to work in any case.

The second issue was that the Dovecot code has some #ifdef clauses that check the openssl version number for the ECDH auto functionality that was added in the OpenSSL 1.0.2 beta versions. As the LibreSSL version number 2.0 is higher than 1.0.2, Dovecot thinks the feature is there and tries to enable it, but the code is not present in LibreSSL. I changed the #ifdefs to check for the actual functionality by checking a constant defined by the ECDH auto code.

Apache httpd

The Apache httpd compilation complained about a missing ENGINE_CTRL_CHIL_SET_FORKCHECK. I have no idea what it does, but I found a patch to fix the issue, so I didn't investigate it further.

Further reading: Someone else tried to get things running on Sabotage Linux.

Update: I've abandoned my own libressl overlay; a LibreSSL overlay by various Gentoo developers is now maintained at GitHub.
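For illustration, the dummy egd functions mentioned above are essentially just stubs like the following (a sketch of my workaround, not the exact patch I submitted):

#include <openssl/rand.h>

/* Always report "no entropy gathering daemon available" (-1) so that
   callers fall back to their other entropy sources. */
int RAND_egd(const char *path)
{
        (void)path;
        return -1;
}

int RAND_egd_bytes(const char *path, int bytes)
{
        (void)path;
        (void)bytes;
        return -1;
}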
Posted by Hanno Böck in Code, Cryptography, English, Gentoo, Linux, Security at 20:31 | Comments (8) | Trackbacks (5)
Wednesday, June 18. 2014
Should science journalists read the studies they write about?
Today I had a little twitter conversation which made me think about the responsibilities a science journalist has. It all started with a quote from Ivan Oransky (who is the editor of Retraction Watch) who said reporting on a study without reading it is 'journalist malpractice'. The source of this is another person who probably just heard him saying that, so I'm not sure what his exact words were.
Admittedly my first thought was: "He is right, too many journalists report about things they don't understand." My second thought was: "If he is right then I am probably guilty of 'journalist malpractice'." So I gave it a second thought, and now I probably don't agree with the statement any more. I had a quick look at articles I wrote in the past and identified the last ten that were more or less coverage of a scientific piece of work. I have marked the ones I actually read with a [Y] and the ones I didn't read with a [N]. I've linked the appropriate scientific works and my articles (all in German). I must admit that I defined "read" widely, meaning that I haven't necessarily read the whole study/article in detail; I sometimes have just tried to parse the parts important to me.
Now the first thing that comes to mind is that I seem to have become lazier recently in reading studies. I hope this isn't the case, and I honestly think it is mostly coincidence.

Now let's get into some details: The first example (the Turing Test) is interesting because it seems there is no scientific publication at all, just a press release. This probably tells you something about the quality of that "research", but while I read the press release I haven't even bothered to check if there is a scientific publication I could read.

The second example becomes interesting. I understand enough to know what a "quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic" actually is and I think I also understand what it means, but there's just no way I could understand the paper itself. This is complex mathematics. I seriously doubt that any journalist who covered this work actually read it. If there is one, I'd like to meet that person. I'm also very sure that the people who wrote the press release overselling this research have neither read this paper nor understood its implications. I think this example gets to the point why I would disagree with the very general statement that a journalist should have read every scientific piece he writes about: it's sometimes so specialized that it's basically impossible. And I don't think this is an outlier. Just think about the Higgs Boson: certainly this is something we want journalists to write about. But I'm pretty sure there are very few - if any - journalists who are able to read the scientific publications that are the basis of this discovery.

Some quick notes on the others: Number 4 was part of a 200-page thesis and the press release was already pretty detailed and technical; I think it was legitimate not to read the original source in that case. Number 5 is somewhat similar to 2, because it is about an algorithm that includes complex math. Number 8 is not really a scientific paper, it is merely a news item on the Nature webpage. In the above list, the only case where I think maybe I should've read the scientific paper and didn't is the Cochrane review on Tamiflu.

Conclusion: Don't get me wrong. I certainly welcome the idea that science journalists should look into the original scientific papers they write about more often - and this doesn't exclude myself. However, as shown above, I doubt that this works in all cases.
Posted by Hanno Böck in Cryptography, English, Science at 15:03 | Comments (0) | Trackbacks (0)
Defined tags for this entry: journalism, science
Sunday, June 15. 2014
Slides from cryptography workshop for web developers
I recently held a workshop about cryptography for web developers at the company Internations. I am publishing the slides here.
Part 1: Crypto and Web [PDF] [LaTeX] [Slideshare]
Part 2: How broken is TLS? [PDF] [LaTeX] [Slideshare]
Part 3: Don't do this yourself [PDF] [LaTeX] [Slideshare]
Part 4: Hashing, Tokens, Randomness [PDF] [LaTeX] [Slideshare]
Part 5: Don't believe the Crypto Hype [PDF] [LaTeX] [Slideshare]
Part 2 is the same talk I recently gave at the Easterhegg conference about TLS.
Posted by Hanno Böck in Code, Cryptography, English, Security at 13:49 | Comments (0) | Trackbacks (0)
Defined tags for this entry: crypto, cryptography, http, https, security, ssl, tls, web, websecurity
Friday, June 6. 2014
Enabling encryption by default and using HTTPS only
I recently switched my personal web page and my blog to deliver content exclusively encrypted via HTTPS. I want to take this opportunity to give some facts about enabling TLS encryption by default and problems you may face.
First of all the non-problems: Enabling HTTPS by default is almost never a significant performance problem. If people tell me that they cannot possibly enable HTTPS due to performance reasons, the first thing I ask is whether they just believe this or whether they have real benchmark data showing it. If you don't believe me on that, I can quote Adam Langley from Google here: "In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead."

Enabling HTTPS may cause a number of compatibility issues you may not instantly think about. First of all, we know that IPs in the IPv4 space are limited and expensive these days, so many people probably can't afford a distinct IP for their web page. The solution to that is a TLS extension called SNI (Server Name Indication), which allows different certificates for different domain names on the same IP. It works in all major browsers and has been working for quite some time. The only major browser you'll face these days that doesn't support SNI is the Android 2.x browser.

There are some subtle issues with SNI. One is that browsers have fallback modes if they cannot connect via TLS, and that may lead to a connection downgrade to SSLv3. That ancient protocol doesn't support extensions and thus no SNI. So you may get irregular certificate errors if you are on a bad connection. A solution to that on the server side is to just disable SSLv3 (an example directive for Apache is at the end of this post). It will make SNI much more reliable. I don't really have a clear picture of how many browsers will fail with SNI. There are probably a number of embedded devices out there, like smart TVs with browsers or things alike, that have problems. If you have any experiences feel free to post them in the comments.

The first issue I only noticed after I switched to HTTPS: I had an application called RSS Graffiti set up to automatically post all articles I write to a Facebook fan page. After changing to HTTPS only it silently stopped working. Re-adding my feed didn't work. I then found a similar service called dlvr.it that I now use to post my RSS feed to Facebook. I can only assume that this is a glimpse of a much bigger problem: there are probably tons of applications and online services out there not prepared for an encrypted Internet. If we want more people to deploy encryption by default we need to find these issues, document them and hopefully put enough pressure on their developers to fix them.

Another yet unfixed issue is the Yandex Bot. Yandex is a search engine and although you may never have heard of it, it's probably one of the few companies in this area that can claim to be a serious competitor to Google. The reason you may not know it is that it mostly operates in the Russian language. Depending on who your page visitors are this may matter more or less. The Yandex Bot speaks SSL, but according to the Qualys SSL test it only supports the ancient SSLv3.
So you have a choice between three possibilities: don't enable HTTPS by default, enable HTTPS with a shitty configuration supporting ancient technology that will cause trouble for SNI, or enable HTTPS with a sane configuration and get no traffic from the leading Russian search engine. None of them sounds very good to me.

Another issue is third party content. For security reasons today's browsers block all active HTTP content (CSS, JavaScript etc.) on HTTPS webpages. This isn't much of a problem for me, but it's a problem for webpages that rely on advertising, because from what I hear most advertisement providers don't support HTTPS yet (Google being a laudable exception here). This is the main reason you won't see many news webpages enforcing HTTPS. However, I still have passive third party HTTP content on my blog. That's why you'll probably see a yellow warning sign in front of the URL in some browsers.
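For reference, disabling SSLv3 (and SSLv2) in Apache with mod_ssl is a one-line change; other servers have similar options:

SSLProtocol all -SSLv2 -SSLv3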
Tuesday, April 29. 2014
Incomplete Certificate Chains and Transvalid Certificates
A number of people seem to be confused about how to correctly install certificate chains for TLS servers. This happens quite often on HTTPS sites, and to avoid having to explain things again and again I thought I'd write up something so I can refer to it. A few days ago flattr.com had a missing certificate chain (fixed now after I reported it) and various pages from the Chaos Computer Club have no certificate chain (not the main page, but several subdomains like events.ccc.de and frab.cccv.de). I've tried countless times to tell someone, but the problem persists. Maybe someone in charge will read this and fix it.
Web browsers ship a list of certificate authorities (CAs) that are allowed to issue certificates for HTTPS websites. The whole system is inherently problematic, but right now that's not the point I want to talk about. Most of the time, people don't get their certificate from one of the root CAs but instead from a subordinate CA. Every CA is allowed to have unlimited numbers of sub CAs.

The correct way of delivering a certificate issued by a sub CA is to deliver both the host certificate and the certificate of the sub CA. This is necessary so the browser can check the complete chain from the root to the host. For example, if you buy your certificate from RapidSSL then the RapidSSL cert is not in the browser. However, the RapidSSL certificate is signed by GeoTrust and that is in your browser. So if your HTTPS website delivers both its own certificate from RapidSSL and the RapidSSL certificate, the browser can validate the whole chain.

However, and here comes the tricky part: if you forget to deliver the chain certificate you often won't notice. The reason is that browsers cache chain certificates. In our example above, if a user first visits a website with a certificate from RapidSSL and the correct chain, the browser will already know the RapidSSL certificate. If the user then surfs to a page where the chain is missing, the browser will still consider the certificate as valid. Such certificates with a missing chain have been called transvalid; I think the term was first used by the EFF for their SSL Observatory.

[Screenshot: Chromium with a bogus error message on a transvalid certificate]

So how can you check if you have a transvalid certificate? One way is to use a fresh browser installation without anything cached. If you then surf to a page with a transvalid certificate, you'll get an error message (however, as we've just seen, not necessarily a meaningful one). An easier way is to use the SSL Test from Qualys. It has a line "Chain issues", and if it shows "None" you're fine. If it shows "Incomplete" then your certificate is most likely transvalid. If it shows anything else you have other things to look after (a common issue is that people unnecessarily send the root certificate, which doesn't cause problems but may make things slower). The Qualys test will tell you all kinds of other things about your TLS configuration. If it tells you something is insecure you should probably look after that, too.
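If you prefer the command line, you can also look directly at what your server sends. With OpenSSL installed, something like this shows all certificates delivered in the handshake (replace the host with your own):

openssl s_client -connect www.example.com:443 -servername www.example.com -showcerts < /dev/null

If only the host certificate shows up there and it wasn't issued directly by a root CA, the chain is incomplete.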
Posted by Hanno Böck in Cryptography, English, Linux, Security at 14:29 | Comment (1) | Trackback (1)
Thursday, April 24. 2014
Easterhegg talk on TLS
Last weekend I was at the Easterhegg in Stuttgart, an event organized by the Chaos Computer Club. I gave a talk with the title "How broken is TLS?"
This was quite a lucky topic. I submitted the talk back in January, so I had no idea that the Heartbleed bug would turn up and raise the interest that much. However, it also made me rework large parts of the talk, because after Heartbleed I thought I had to take a much broader view on the issues. The slides are here as PDF, here as LaTeX and here on Slideshare. There's also a video recording here (media.ccc.de) and also on Youtube.

I also gave a short lightning talk with some thoughts on paperless life, however it's only in German. Slides are here (PDF), here (LaTeX) and here (Slideshare). (It seems there is no video recording; if it appears later I'll add the link.)
Posted by Hanno Böck in Computer culture, Cryptography, English, Life, Security at 16:37 | Comments (0) | Trackback (1)
Defined tags for this entry: ccc, cryptography, easterhegg, papierlos, security, slides, ssl, stuttgart, talk, tls
Thursday, March 6. 2014
Diffie Hellman and TLS with nonsense parameters
tl;dr A very short key exchange crashes Chromium/Chrome. Other browsers accept parameters for a Diffie Hellman key exchange that are complete nonsense. In combination with recently found TLS problems this could be a security risk.
People who tried to access the webpage https://demo.cmrg.net/ recently with a current version of the Chrome browser or its free counterpart Chromium have experienced that it causes a crash in the browser. On Tuesday this was noted on the oss-security mailing list. The news spread quickly and gave this test page some attention. But the page was originally not set up to crash browsers. According to a thread on LWN.net it was set up in November 2013 to test extremely short parameters for a key exchange with Diffie Hellman.

Diffie Hellman can be used in the TLS protocol to establish a connection with perfect forward secrecy. For a key exchange with Diffie Hellman a server needs two parameters, which are transmitted to the client on a connection attempt: a large prime and a so-called generator. The size of the prime defines the security of the algorithm. Usually, primes with 1024 bit are used today, although this is not very secure. Mostly the Apache web server is responsible for this, because before the very latest version 2.4.7 it was not able to use longer primes for key exchanges. The test page mentioned above tries a connection with 16 bit - extremely short - and it seems it has caught a serious bug in Chromium.

We had a look at how other browsers handle short or nonsense key exchange parameters. Mozilla Firefox rejects connections with very short primes like 256 bit or shorter, but connections with 512 and 768 bit were possible, which is completely insecure today. When the Chromium crash is prevented with a patch that is available it shows the same behavior. Both browsers use the NSS library, which blocks connections with very short primes.

The test with the Internet Explorer was a bit difficult, because usually the Microsoft browser doesn't support Diffie Hellman key exchanges. It is only possible if the server certificate uses a DSA key with a length of 1024 bit. DSA keys for TLS connections are extremely rare; most certificate authorities only support RSA keys, and certificates with 1024 bit usually aren't issued at all today. But we found that CAcert, a free certificate authority that is not included in mainstream browsers, still allows DSA certificates with 1024 bit. The Internet Explorer allowed only connections with primes of 512 bit or larger. Interestingly, Microsoft's browser also rejects connections with 2048 and 4096 bit. So it seems Microsoft doesn't accept too much security. But in practice this is mostly irrelevant; with common RSA certificates the Internet Explorer only allows key exchange with elliptic curves.

Opera is stricter than other browsers with short primes. Connections below 1024 bit produce a warning and the user is asked if he really wants to connect. Other browsers should probably also reject such short primes. There are no legitimate reasons for a key exchange with less than 1024 bit.

The behavior of Safari on MacOS and Konqueror on Linux was interesting. Both browsers accepted almost any kind of nonsense parameters. Very short primes like 17 were accepted. Even with 15 as a "prime" a connection was possible.

No browser checks if the transmitted prime is really a prime. A test connection with 1024 bit which used a prime parameter that was non-prime was possible with all browsers. The reason is probably that testing a prime is not trivial. To test large primes the Miller-Rabin test is used. It doesn't provide a strict mathematical proof for primality, only a very high probability, but in practice this is good enough.
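Just to illustrate what such a check would look like: with the bignum API of OpenSSL or LibreSSL, a client could run a Miller-Rabin based test on a server-supplied prime roughly like this (a hypothetical sketch, not code from any browser; it only checks primality, not the size of the prime):

#include <openssl/bn.h>

/* Returns 1 if p passes the Miller-Rabin based primality test, 0 otherwise. */
int dh_prime_looks_ok(const BIGNUM *p)
{
        BN_CTX *ctx = BN_CTX_new();
        int ret;

        if (ctx == NULL)
                return 0;
        /* BN_prime_checks selects enough Miller-Rabin rounds for a
           negligible error probability at the size of p. */
        ret = BN_is_prime_ex(p, BN_prime_checks, ctx, NULL);
        BN_CTX_free(ctx);
        return ret == 1;
}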
A Miller-Rabin test with 1024 bit is very fast, but with 4096 bit it can take seconds on slow CPUs - for an HTTPS connection an often unacceptable delay.

At first it seems that it is irrelevant whether browsers accept insecure parameters for a key exchange. Usually this does not happen. The only way this could happen is a malicious server, but that would mean that the server itself is not trustworthy. The transmitted data is not secure anyway in this case, because the server could send it to third parties completely unencrypted. But in connection with client certificates insecure parameters can be a problem. Some days ago a research team found some possibilities for attacks against the TLS protocol. In these attacks a malicious server could pretend to another server that it has the certificate of a user connecting to the malicious server. The authors of this so-called Triple Handshake attack mention one variant that uses insecure Diffie Hellman parameters. Client certificates are rarely used, so in most scenarios this does not matter. The authors suggest that TLS could use standardized parameters for a Diffie Hellman key exchange. Then a server could check quickly if the parameters are known - and would be sure that they are real primes. Future research may show if insecure parameters matter in other scenarios.

The crash problems in Chromium show that in the past software wasn't very well tested with nonsense parameters in cryptographic protocols. Similar tests for other protocols could reveal further problems.

The mentioned tests for browsers are available at the URL https://dh.tlsfun.de/. This text is mostly a translation of a German article I wrote for the online magazine Golem.de.
Posted by Hanno Böck in Cryptography, English, Linux, Security at 19:27 | Comments (2) | Trackbacks (0)
Defined tags for this entry: chrome, chromium, crash, cryptography, diffiehellman, forwardsecrecy, keyexchange, security, ssl, tls
Saturday, January 26. 2013
Explain hard stuff with the 1000 most common words #UPGOERFIVE
Based on the XKCD comic "Up Goer Five", someone made a nice little tool: an online text editor that lets you use only the 1000 most common words in English, and asks you to explain a hard idea with it.
Nice idea. I gave it a try. The most obvious example to use was my diploma thesis (on RSA-PSS and provable security), where I always had a hard time explaining to anyone what it was all about. Well, obviously math, proof, algorithm, encryption etc. are all forbidden, but I had a hard time with the fact that even words like "message" (or anything equivalent) don't seem to be in the top 1000. Here we go:

When you talk to a friend, she or he knows you are the person in question. But when you do this with a friend far away through computers, you can not be sure. That's why computers have ways to let you know if the person you are talking to is really the right person. The ways we use today have one problem: We are not sure that they work. It may be that a bad person knows a way to be able to tell you that he is in fact your friend. We do not think that there are such ways for bad persons, but we are not completely sure. This is why some people try to find ways that are better. Where we can be sure that no bad person is able to tell you that he is your friend. With the known ways today this is not completely possible. But it is possible in parts. I have looked at those better ways. And I have worked on bringing these better ways to your computer.

So - do you now have an idea what I was talking about?

I found this nice tool through Ben Goldacre, who tried to explain randomized trials, blinding, systematic review and publication bias - go there and read it. Knowing what publication bias and systematic reviews are is much more important for you than knowing what RSA-PSS is. You can leave cryptography to the experts, but you should care about your health. And for the record, I recently tried myself to explain publication bias (German only).
Posted by Hanno Böck in Cryptography, English, Life, Science, Security at 11:51 | Comments (0) | Trackbacks (0)
Saturday, January 19. 2013
How to configure your HTTPS server
Yesterday, we had a meeting at CAcert Berlin where I gave a little talk about how to almost-perfectly configure your HTTPS server. Motivation for that was the very nice Qualys SSL Server test, which can remote-check your SSL configuration and tell you a bunch of things about it.
While playing with that, I created a test setup which passes with 100 points in the Qualys test. However, you will hardly be able to access that page, which is mainly due to its exclusive support for TLS 1.2. All major browsers fail. Someone from the audience told me that the iPhone browser was able to access the page successfully. To save the reputation of free software, someone else found out that the Midori browser is also capable of accessing it. I've described what I did there on the page itself and you may also read it here via http.

Here are my slides "SSL, X.509, HTTPS - How to configure your HTTPS server" as ODP, as PDF and on Slideshare.

And some links mentioned in the slides:
Check SSL and SSH weak keys due to broken random numbers
EFF SSL Observatory
Sovereign Keys project

Some great talks on the mentioned topics by others:
Facthacks Talk 29c3
MD5 considered harmful today - Creating a rogue CA Certificate
Is the SSLiverse a safe place?

Update: As people seem to find these browser issues interesting: It's been pointed out that the iPad browser also works. Opera with TLS 1.2 enabled seems to work for some people, but not for me (maybe Windows-only). luakit and epiphany also work, but they don't check certificates at all, so that kind of doesn't count.
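In case you want to reproduce something similar: with a recent Apache and OpenSSL, a TLS-1.2-only setup can be done with directives roughly like these (a sketch, not my exact configuration):

SSLProtocol -all +TLSv1.2
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384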
Posted by Hanno Böck in Computer culture, Cryptography, Gentoo, Linux at 11:45 | Comments (5) | Trackbacks (0)
Defined tags for this entry: ca, cacert, certificate, cryptography, encryption, https, security, ssl, tls, x509
Wednesday, October 17. 2012
The GEMA nuisance
One could write a lot about GEMA and get annoyed a lot, for example every other time you click on a YouTube video and are told that this video is "not available in your country". A particularly blatant annoyance, however, is the so-called GEMA-Vermutung (GEMA presumption). In the end it means that GEMA can collect money for musicians who are not members of it at all.
How does that work? Roughly speaking, the GEMA presumption says the following: it is assumed that almost all music is represented by GEMA, and thus in case of doubt it is always assumed that GEMA has a claim to payments. Concretely: anyone who wants to organize a concert, produce a CD or do anything else involving music has to expect to be approached by GEMA - even if the artists in question are not GEMA members. Because you have to prove that yourself.

This leads to absurd situations: the regional court of Mannheim, for instance, demanded that all musicians who had performed at the KTS in Freiburg over several years appear in person as witnesses in court. A written declaration by the musicians was not enough. A case recently made headlines in which the Musikpiraten were supposed to pay GEMA because a sampler they published included a musician who performed under a pseudonym and did not want to reveal his real name. Here, too, GEMA prevailed.

The GEMA presumption is completely absurd nowadays. It was introduced at a time when it could actually be assumed that the majority of the more successful artists were members. Today, however, there are countless musicians and artists who take other paths, rely on Creative Commons licenses and want nothing to do with GEMA.

Why am I writing all this? Because there is currently a petition against the GEMA presumption before the Bundestag - which all of you should please support IMMEDIATELY. It only runs until this evening and still needs a good 10,000 supporters for a hearing to take place.
Posted by Hanno Böck in Cryptography, Politics at 11:00 | Comments (0) | Trackbacks (0)
Defined tags for this entry: bundestag, copyright, creativecommons, epetition, gema, gemavermutung, kts, musik, musikpiraten, petition
Thursday, October 27. 2011
Defense of my diploma thesis
The defense of my diploma thesis on RSA-PSS at HU Berlin will take place on November 10th. The event is public (5 pm sharp, Rudower Chausee 25, Campus Berlin-Adlershof, room 3'113). Note: the date and location have been changed.
Here is the announcement: The encryption and signature scheme RSA is by far the most widely used public key scheme. RSA cannot be used in its original form, because that leads to massive security problems. A so-called padding is required as a preprocessing step. So far, a simple hash function has usually been used for this. As early as 1996, Mihir Bellare and Phillip Rogaway presented an improved scheme for signatures called "Probabilistic Signature Scheme" (PSS). Under certain assumptions it guarantees "provable" security. The diploma thesis investigated what advantages RSA-PSS offers over earlier schemes and to what extent RSA-PSS is already used in common protocols. Furthermore, an implementation of the scheme for X.509 certificates was created for the nss library. nss is used by Mozilla Firefox and Google Chrome, among others.

Friday, August 12. 2011
OpenLeaks doing strange things with SSL
OpenLeaks is a planned platform like WikiLeaks, founded by ex-Wikileaks member Daniel Domscheit-Berg. It was announced a while back and a beta is currently being presented in cooperation with the newspaper taz during the Chaos Communication Camp (where I am right now).
I had a short look and found some things noteworthy: The page is SSL-only; any connection attempt with http will be forwarded to https. When I opened the page in Firefox, I got a message that the certificate is not valid. That's obviously bad, although most people probably won't see this message. What is wrong here is that an intermediate certificate is missing - we have a so-called transvalid certificate (the term "transvalid" has been used for it by the EFF SSL Observatory project). Firefox includes the root certificate from Go Daddy, but the certificate is signed by another certificate which itself is signed by the root certificate. To make this work, one has to ship the so-called intermediate certificate when opening an SSL connection.

The reason why most people won't see this warning and why it probably went unnoticed is that browsers remember intermediate certificates. If someone was ever on a webpage which uses the Go Daddy intermediate certificate, he won't see this warning. I saw it because I usually don't use Firefox and it had a rather fresh configuration.

There was another thing that bothered me: on top of the page, there's a line "Before submitting anything verify that the fingerprints of the SSL certificate match!" followed by a SHA-1 certificate fingerprint. Besides the fact that it's English on a German page, this is a rather ridiculous suggestion. Checking a fingerprint of an SSL connection against one you got through exactly that SSL connection is bogus. Checking a certificate fingerprint doesn't make any sense if you got it through a connection that was secured with that certificate. If checking a fingerprint is supposed to make sense, it has to come through a different channel. Besides that, nowhere is it explained how a user should do that or what a fingerprint is at all. I doubt that this is of any help for the audience targeted by a whistleblower platform - it will probably only confuse people.

Both issues give me the impression that the people who designed OpenLeaks don't really know how SSL works - and that's not a good sign.
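For the record, one way to display a server certificate's SHA-1 fingerprint with the OpenSSL command line tools is something like this (which of course only helps if you compare it against a fingerprint obtained through a different channel):

openssl s_client -connect <host>:443 < /dev/null | openssl x509 -noout -fingerprint -sha1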
Posted by Hanno Böck in Computer culture, Cryptography, English, Security at 17:26 | Comments (6) | Trackbacks (2)
Saturday, July 30. 2011
Using EFF SSL Observatory to find weak keys in CAcert

[Image: (c) EFF, Creative Commons BY]

I did some checks on the all_certs table, selecting the certificates from CAcert. I found out that there were 143 valid certificates with 512 bit. That is completely insecure and breakable by a home computer today. I also found that the majority of certificates still has 1024 bit, which by today's standards should be considered harmful - there have been no public breaks yet, but it's expected that it's possible to build an RSA-1024 cracker for an attacker with enough money. I did the following query on the database:

SELECT RSA_Modulus_Bits, count(*) FROM all_certs WHERE `Validity:Not After datetime` > '2010-03-08' AND ( `Issuer` like '%CAcert.org%' OR `Issuer` like '%cacert.org') GROUP BY `RSA_Modulus_Bits` ORDER BY count(*);

Now, what further checks can we do? I checked for the RSA exponent. I found two certificates in the database with exponent 3. RSA with a low exponent is also considered insecure, although one has to state that this is not a serious issue. RSA with low exponents is not insecure by itself, but it can create vulnerabilities in combination with other issues (if you're interested in details, read my diploma thesis).

I have not checked the CAcert database for the Debian SSL vulnerability, as that would've been non-trivial. There were scripts shipped with the SSL Observatory data, but I found them not easy to use, so I skipped that part.

My suggestions to CAcert were to revoke all certificates with serious issues (like the 512 bit certificates). Also, I suggested that new certificates with insecure settings like RSA below 2048 bits or a low exponent should not be allowed. CAcert did most of this. By now, all 512 bit certificates should be revoked and it is impossible to create new ones below 1024 bit or with low exponents. It is however still possible to create 1024 bit certificates, which is due to a limitation in the client certificate creation script for the Internet Explorer. They say they're working on this and plan to prevent 1024 bit certificates in the future. They also told me that they've checked for the Debian SSL bug.

I reported the issue on the 11th of March and got a reply on the same day - that's pretty okay. One slight thing still: there was no security contact with a PGP key listed on the webpage (but I got a PGP-encrypted contact once I asked for it). That's not good; I expect especially from a security project that I can contact them for security issues with encrypted mail. One can also argue that four months is a bit long to fix such an issue, but as it was far away from being trivial, this can be excused.

I'd say that I'm quite satisfied with the reactions of CAcert. I always got fast replies to questions I had and the issues were resolved in a proper way. I have other points of criticism of the security of CAcert; the issue that bothers me most is that they still use SHA-1 and refuse to switch to a more secure hashing algorithm like SHA-512, although all major browsers have supported this for a long time.

I want to encourage others to do further tests on CAcert. I'd like to see CAcert being an authority that does better than the commercial ones. The database from the observatory is a treasure and should be used by projects like CAcert to improve their security.
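The exponent check can be done with a similar query; roughly something like this (the exponent column name is an assumption here and may differ in the observatory schema):

SELECT count(*) FROM all_certs
WHERE `Validity:Not After datetime` > '2010-03-08'
AND ( `Issuer` like '%CAcert.org%' OR `Issuer` like '%cacert.org')
AND `RSA_Exponent` = 3;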
Wednesday, May 4. 2011
Diploma thesis on RSA-PSS finished
Today I submitted my diploma thesis to my university.
The thesis summarizes several months of investigation of the Probabilistic Signature Scheme (PSS). Traditionally, RSA signatures are done by hashing a message and then signing the hash. PSS is an improved, provably secure scheme to prepare a message before signing. The main focus was to investigate where PSS is implemented and used in real world cryptographic applications, with a special focus on X.509. During my work on that, I also implemented PSS signatures for the nss library in the Google Summer of Code 2010.

The thesis itself (including PDF and LaTeX sources), patches for nss and everything else relevant can be found at http://rsapss.hboeck.de.
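As a side note: with a reasonably recent OpenSSL you can already create an RSA-PSS signature from the command line, for example like this (a sketch; the key and file names are placeholders):

openssl dgst -sha256 -sign key.pem -sigopt rsa_padding_mode:pss -out message.sig message.txt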
Thursday, April 21. 2011
X.509 / SSL certificate test cases
https is likely the most widely used cryptographic protocol. It's based on X.509 certificates. There's a lively debate about how useful this concept is at all, mainly through the interesting findings of the EFF SSL Observatory. But that won't be my point today.
Pretty much all webpage certificates use RSA and sadly, the vast majority still uses insecure hash algorithms. But it is rarely known that the X.509 standards support a whole bunch of other public key algorithms. I've set up a page with a couple of test cases for less often used algorithm combinations. At the moment, it's mainly focused on RSASSA-PSS, but I plan to add elliptic curve algorithms soon. As I won't get any certificate authority to sign me certificates with anything else than classic RSA, I created my own testing root CA. I'd be very interested to get some feedback. If you happen to have some interesting OS/browser combination, please import the root certificate and send me a screenshot where I can see how many green ticks there are (post a link to the screenshot in the comments or send it via email). At the moment, I'm especially looking for people to test:
About me
You can find my web page with links to my work as a journalist at https://hboeck.de/. You may also find my newsletter about climate change and decarbonization technologies interesting.
Hanno Böck
mail: hanno@hboeck.de
Hanno on Mastodon
Impressum