Entries tagged as security
Thursday, April 10. 2014
Tomorrow (2014-04-11) I will give a talk at the hackerspace AFRA in Berlin. Here is the announcement:
Popular web applications and content management systems regularly have security vulnerabilities. Users need to update these applications regularly, but many website operators are not aware of this. While running servers for a few hundred customers, I developed the tool FreeWVS, which can detect web applications with known security vulnerabilities. If updates are neglected, hacked web applications will almost inevitably show up at some point. Tracking them down, however, is not necessarily trivial. Once it is too late, your server may be turned into a spam cannon or abused for DDoS attacks.
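To give a rough idea of the approach a scanner like this can take - the following is a hypothetical Python sketch, not FreeWVS code; the signature format, file paths and version threshold are made up for illustration:

import re
from pathlib import Path

# Map of app name -> (version file, version regex, assumed first safe version)
SIGNATURES = {
    "wordpress": ("wp-includes/version.php", r"\$wp_version = '([\d.]+)'", (3, 8, 3)),
}

def scan(docroot):
    for app, (relpath, pattern, safe) in SIGNATURES.items():
        f = Path(docroot) / relpath
        if not f.is_file():
            continue
        m = re.search(pattern, f.read_text(errors="ignore"))
        if m:
            version = tuple(int(x) for x in m.group(1).split("."))
            status = "OK" if version >= safe else "VULNERABLE"
            print(f"{app} {m.group(1)}: {status}")

scan("/var/www")  # hypothetical document root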
The talk starts at 20:00. AFRA is located at Herzbergstraße 55, near the tram stop Herzbergstraße/Siegfriedstraße.
Update: The slides are available here as PDF, here as LaTeX source and on Slideshare.
Wednesday, April 2. 2014
A while ago I wrote about the botnet story and the BSI. In January, the BSI had offered to check whether your mail address was contained in a data set that had apparently been found during the analysis of a botnet. One of my mail addresses was affected, and I found the information that some account of mine had apparently been hacked somewhere rather useless, so I asked the BSI for further information under the Federal Data Protection Act (Bundesdatenschutzgesetz). I haven't yet reported how that went on.
The BSI then informed me that it doesn't possess the data itself. It had only stored the hashes of the affected mail addresses. The complete data is only held by the responsible law enforcement authorities. They didn't tell me which ones, but from media reports I knew it had to be the public prosecutor's office in Verden.
So I filed another request with the public prosecutor's office in Verden. It told me what was already reported in this article on Golem.de: in the prosecutor's view, the data protection act does not apply in this case; instead, the code of criminal procedure (Strafprozessordnung) is relevant. My request could not be answered because, on the one hand, it would endanger the success of the investigation and, on the other hand, the effort would be disproportionate.
I then asked the data protection commissioner of Lower Saxony and the Federal Commissioner for Data Protection. From the former I received a short answer that unfortunately confirmed the prosecutor's legal view. From the office of the Federal Commissioner I received no answer at all. I would probably only get further with a lawyer. So the matter ended there - unsatisfyingly.
In the end, I'm left with the unpleasant insight that the right to information in the Federal Data Protection Act is apparently much less effective than I would have expected. And I still don't have any meaningful information about which of my account data was hacked.
Thursday, March 6. 2014
tl;dr A very short key exchange crashes Chromium/Chrome. Other browsers accept parameters for a Diffie-Hellman key exchange that are complete nonsense. In combination with recently found TLS problems this could be a security risk.
People who recently tried to access the webpage https://demo.cmrg.net/ with a current version of the Chrome browser or its free counterpart Chromium have experienced a browser crash. On Tuesday this was noted on the oss-security mailing list. The news spread quickly and gave this test page some attention. But the page was originally not set up to crash browsers. According to a thread on LWN.net it was set up in November 2013 to test extremely short parameters for a Diffie-Hellman key exchange. Diffie-Hellman can be used in the TLS protocol to establish a connection with perfect forward secrecy.
For a Diffie-Hellman key exchange a server needs two parameters, which are transmitted to the client on a connection attempt: a large prime and a so-called generator. The size of the prime determines the security of the algorithm. Today, 1024-bit primes are commonly used, although this is not very secure. Mostly the Apache web server is responsible for this, because before the very latest version 2.4.7 it was unable to use longer primes for key exchanges.
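To illustrate why the prime size matters, here is a toy Python sketch with made-up tiny numbers; with a prime this small, an eavesdropper recovers the shared secret by brute force:

import random

p, g = 65537, 3                      # tiny prime and generator, far too small
a = random.randrange(2, p - 1)       # server's secret exponent
b = random.randrange(2, p - 1)       # client's secret exponent
A, B = pow(g, a, p), pow(g, b, p)    # public values sent over the wire
assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same secret

# An attacker who saw A and B just tries every exponent (discrete log):
x = next(x for x in range(1, p) if pow(g, x, p) == A)
assert pow(B, x, p) == pow(B, a, p)  # shared secret recovered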
The test page mentioned above tries a connection with a 16-bit prime - extremely short - and it seems it has caught a serious bug in Chromium. We had a look at how other browsers handle short or nonsensical key exchange parameters.
Mozilla Firefox rejects connections with very short primes of 256 bit or less, but connections with 512 and 768 bit were possible, which is completely insecure today. When the Chromium crash is prevented with an available patch, it shows the same behavior. Both browsers use the NSS library, which blocks connections with very short primes.
The test with Internet Explorer was a bit difficult because the Microsoft browser usually doesn't support Diffie-Hellman key exchanges. It is only possible if the server certificate uses a DSA key with a length of 1024 bit. DSA keys for TLS connections are extremely rare; most certificate authorities only support RSA keys, and certificates with 1024 bit usually aren't issued at all today. But we found that CAcert, a free certificate authority that is not included in mainstream browsers, still allows DSA certificates with 1024 bit. Internet Explorer allowed only connections with primes of 512 bit or larger. Interestingly, Microsoft's browser also rejects connections with 2048 and 4096 bit, so it seems Microsoft doesn't accept too much security. In practice this is mostly irrelevant, though: with common RSA certificates Internet Explorer only allows key exchanges with elliptic curves.
Opera is stricter with short primes than other browsers. Connections below 1024 bit produce a warning and users are asked if they really want to connect. Other browsers should probably also reject such short primes; there are no legitimate reasons for a key exchange with less than 1024 bit.
The behavior of Safari on MacOS and Konqueror on Linux was interesting. Both browsers accepted almost any kind of nonsense parameters. Very short primes like 17 were accepted. Even with 15 as a "prime" a connection was possible.
No browser checks whether the transmitted prime is actually a prime. A test connection with 1024 bit that used a composite number as the prime parameter was possible with all browsers. The reason is probably that primality testing is not trivial. To test large numbers for primality, the Miller-Rabin test is used. It doesn't provide a strict mathematical proof of primality, only a very high probability, but in practice this is good enough. A Miller-Rabin test with 1024 bit is very fast, but with 4096 bit it can take seconds on slow CPUs - for an HTTPS connection, an often unacceptable delay.
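For reference, here is what such a probabilistic test looks like - a standard Miller-Rabin sketch in Python, not the code of any particular TLS library:

import random

def is_probable_prime(n, rounds=40):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness: n is definitely composite
    return True  # no witness found: n is prime with very high probability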
At first glance it seems irrelevant whether browsers accept insecure parameters for a key exchange, because usually this does not happen. The only way it could happen is with a malicious server, but that would mean the server itself is not trustworthy. In that case the transmitted data is not secure anyway, because the server could pass it on to third parties completely unencrypted.
But in combination with client certificates, insecure parameters can be a problem. Some days ago a research team found several possible attacks against the TLS protocol. In these attacks, a malicious server can pretend to another server that it holds the certificate of a user connecting to it. The authors of this so-called Triple Handshake attack mention one variant that uses insecure Diffie-Hellman parameters. Client certificates are rarely used, so in most scenarios this doesn't matter. The authors suggest that TLS could use standardized parameters for Diffie-Hellman key exchanges. A server could then quickly check whether the parameters are known - and would be sure that they are real primes. Future research may show whether insecure parameters matter in other scenarios.
The crash problems in Chromium show that in the past, software wasn't tested very well against nonsense parameters in cryptographic protocols. Similar tests for other protocols could reveal further problems.
The mentioned tests for browsers are available at the URL https://dh.tlsfun.de/.
This text is mostly a translation of a German article I wrote for the online magazine Golem.de.
Thursday, January 23. 2014
Since the day before yesterday, the Federal Office for Information Security (BSI) has been making quite a stir about some login credentials it allegedly obtained from a botnet. Unfortunately, the BSI has so far been very sparing with details. After the corresponding website became reasonably reachable again after a few hours, I checked various mail addresses I had used in the past. For an address at a large German freemail provider, which I had used as my primary mail address a long time ago, the test triggered and I received one of the warning mails (I document the mail below). This is interesting for two reasons:
1. I haven't used this mail address for about ten years. I switched all accounts I actively use to my current mail address on my own domain. That means: the data the BSI has is - at least partially - ancient.
One thing that may be interesting here: last November there was a major leak of account data at Adobe, and an account with this mail address was among them (don't ask me why I ever had an Adobe account; as I said, it's at least ten years ago). It's pure speculation, of course, but it seems at least conceivable to me that the BSI data is simply the same data. The timing would fit. (Update: several people told me that they have mail addresses affected by the Adobe leak which the BSI doesn't know about, so this speculation is most likely wrong.)
2. What I get here is an essentially useless warning and impracticable advice. Because what the BSI ultimately tells me is: they have login credentials for some account somewhere associated with one of my mail addresses. Or, to quote the BSI: "You may use this account with a social network, an online shop, an e-mail service, online banking or another internet service."
This comes with the barely actionable suggestion that I should best change all my passwords.
What I'd now like to know: does the BSI know which account this is about? And if so, why don't they tell me? At least I will try to get an answer to that. According to the Federal Data Protection Act, the BSI is obliged to provide me with information about data they have stored about me.
Here is the complete mail you receive from the BSI:
Dear Madam or Sir,
You have received this e-mail because the e-mail address [...] was entered and checked on the website www.sicherheitstest.bsi.de.
The e-mail address you entered, [...], was stored by criminal botnet operators together with the password of an online account linked to this e-mail address. You may use this account with a social network, an online shop, an e-mail service, online banking or another internet service.
To prevent this misuse in the future, the BSI recommends the following steps:
1. Check your own computer, as well as any other computers you use to access the internet, for malware infections using a common antivirus program.
2. Change all passwords you use to log in to online services.
3. Read the further information on this at www.sicherheitstest.bsi.de.
This e-mail is signed by the BSI. You can also find out how to verify the signature at www.sicherheitstest.bsi.de.
Kind regards,
Your BSI security test team
Saturday, January 19. 2013
Yesterday, we had a meeting at CAcert Berlin where I gave a little talk about how to almost-perfectly configure your HTTPS server. The motivation was the very nice Qualys SSL Server Test, which can remotely check your SSL configuration and tell you a bunch of things about it.
While playing with that, I created a test setup which passes the Qualys test with 100 points. However, you will hardly be able to access that page, which is mainly due to its exclusive support for TLS 1.2. All major browsers fail. Someone from the audience told me that the iPhone browser was able to access the page. To save the reputation of free software, someone else found out that the Midori browser is also capable of accessing it. I've described what I did there on the page itself and you may also read it here via http.
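For illustration, this is roughly what such a restriction looks like in code - a minimal sketch using Python's ssl module, not my actual server configuration; the file names are placeholders:

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # and anything newer
ctx.load_cert_chain("cert.pem", "key.pem")    # hypothetical paths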
Here are my slides "SSL, X.509, HTTPS - How to configure your HTTPS server" as ODP, as PDF and on Slideshare.
And some links mentioned in the slides:
Check SSL and SSH weak keys due to broken random numbers
EFF SSL Observatory
Sovereign Keys project
Some great talks on the mentioned topics by others:
Facthacks Talk 29c3
MD5 considered harmful today - Creating a rogue CA Certificate
Is the SSLiverse a safe place?
Update: As people seem to find this browser issue interesting: it's been pointed out that the iPad browser also works. Opera with TLS 1.2 enabled seems to work for some people, but not for me (maybe Windows-only). luakit and epiphany also work, but they don't check certificates at all, so that kind of doesn't count.
Thursday, October 27. 2011
The defense of my diploma thesis on RSA-PSS at the HU Berlin will take place on November 10th. The event is public (17:00 sharp, Rudower Chaussee 25, Campus Berlin-Adlershof, room 3'113). Note: the date and location have changed.
Here is the announcement:
The encryption and signature scheme RSA is by far the most widely used public key algorithm. RSA cannot be used in its original form, as this leads to massive security problems. A so-called padding is required as a preprocessing step. Until now, a simple hash function has mostly been used for this. As early as 1996, Mihir Bellare and Phillip Rogaway presented an improved scheme for signatures called the "Probabilistic Signature Scheme" (PSS). Under certain assumptions, it guarantees "provable" security.
The diploma thesis examined what advantages RSA-PSS offers over earlier schemes and to what extent RSA-PSS is already used in widespread protocols. Furthermore, an implementation of the scheme for X.509 certificates was written for the NSS library. NSS is used by Mozilla Firefox and Google Chrome, among others.
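As an illustration of what PSS signing looks like in practice - this sketch uses the Python cryptography package, not the NSS implementation from the thesis:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = key.sign(b"some message", pss, hashes.SHA256())
# verify() raises InvalidSignature if the signature does not match
key.public_key().verify(signature, b"some message", pss, hashes.SHA256())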
Friday, August 12. 2011
OpenLeaks is a planned platform similar to WikiLeaks, founded by ex-WikiLeaks member Daniel Domscheit-Berg. It was announced a while back and a beta is currently being presented in cooperation with the newspaper taz during the Chaos Communication Camp (where I am right now).
I had a short look and found some things noteworthy:
The page is SSL-only; any connection attempt via http is forwarded to https. When I opened the page in Firefox, I got a message that the certificate is not valid. That's obviously bad, although most people probably won't see this message.
What is wrong here is that an intermediate certificate is missing - we have a so-called transvalid certificate (the term "transvalid" was coined for this by the EFF SSL Observatory project). Firefox includes the root certificate from Go Daddy, but the server certificate is signed by another certificate which is itself signed by the root certificate. To make this work, the server has to ship this so-called intermediate certificate when opening an SSL connection.
The reason why most people won't see this warning and why it probably went unnoticed is that browsers remember intermediate certificates. If someone has ever been on a webpage which uses the Go Daddy intermediate certificate, they won't see this warning. I saw it because I usually don't use Firefox and it had a rather fresh configuration.
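A client that does not cache intermediates will fail to verify such a chain. A small Python sketch of that behavior, assuming a hypothetical host that serves its certificate without the intermediate:

import socket
import ssl

ctx = ssl.create_default_context()  # verifies against the system root store
try:
    with socket.create_connection(("example.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.org"):
            print("chain verified")
except ssl.SSLCertVerificationError as e:
    # A missing intermediate looks like an unknown issuer, even though
    # the root itself is trusted - a transvalid certificate.
    print("verification failed:", e)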
There was another thing that bothered me: at the top of the page there's a line "Before submitting anything verify that the fingerprints of the SSL certificate match!" followed by a SHA-1 certificate fingerprint. Beside the fact that it's English on a German page, this is a rather ridiculous suggestion. Checking a certificate fingerprint against one you received through a connection secured with exactly that certificate is bogus. For a fingerprint check to make sense, the fingerprint has to come through a different channel. Beside that, it's nowhere explained how a user should do that or what a fingerprint even is. I doubt this is of any help to the target audience of a whistleblower platform - it will probably only confuse people.
Both issues give me the impression that the people who designed OpenLeaks don't really know how SSL works - and that's not a good sign.
Saturday, July 30. 2011
I've written in the past about the EFF SSL Observatory. It's a great project that has scanned the whole IP space for SSL-certificates used in HTTPS. They provide a database with meta information and their project found a couple of issues where CAs have issued certificates with weak security settings and violation of their own policies. CAcert is a project which tries to be the "better SSL authority" - it issues certificates for free, based on a web-of-trust community. The CAcert root certificate is not part of any major web browser. The EFF has mainly analyzed the browser-accepted CAs - but they provide the data, so I could do it myself.
I did some checks on the all_certs table, selecting the certificates from CAcert. I found that there were 143 valid certificates with 512 bit keys. That is completely insecure and breakable by a home computer today. I also found that the majority of certificates still have 1024 bit keys, which by today's standards should be considered harmful - there have been no public breaks yet, but it's expected that an attacker with enough money could build an RSA-1024 cracker.
I did the following query on the database:
SELECT RSA_Modulus_Bits, count(*) FROM all_certs
WHERE `Validity:Not After datetime` > '2010-03-08'
AND (`Issuer` like '%CAcert.org%' OR `Issuer` like '%cacert.org')
GROUP BY `RSA_Modulus_Bits` ORDER BY count(*);
+------------------+----------+
| RSA_Modulus_Bits | count(*) |
+------------------+----------+
[...]
|              512 |      143 |
|             4096 |      632 |
|             2048 |     3716 |
|             1024 |     5790 |
+------------------+----------+
Now, what further checks can we do? I checked the RSA exponent. I found two certificates in the database with exponent 3. RSA with a low exponent is also considered insecure, although one has to say that this is not a serious issue: RSA with low exponents is not insecure by itself, but it can create vulnerabilities in combination with other issues (if you're interested in details, read my diploma thesis).
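The kind of check involved is simple; here is a sketch of it in Python using the cryptography package, applied to a single PEM file (the file name is a placeholder, the thresholds mirror the suggestions below):

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

with open("cert.pem", "rb") as f:          # hypothetical file name
    cert = x509.load_pem_x509_certificate(f.read())
pub = cert.public_key()
if isinstance(pub, rsa.RSAPublicKey):
    if pub.key_size < 2048:
        print(f"weak modulus: {pub.key_size} bit")
    if pub.public_numbers().e <= 3:
        print(f"low exponent: {pub.public_numbers().e}")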
I have not checked the CAcert database for the Debian SSL vulnerability, as that would've been non-trivial. There were scripts shipped with the SSL Observatory data, but I found them not easy to use, so I skipped that part.
My suggestions to CAcert were to revoke all certificates with serious issues (like the 512 bit certificates). Also, I suggested that new certificates with insecure settings, like RSA below 2048 bit or a low exponent, should no longer be allowed. CAcert did most of this. By now, all 512 bit certificates should be revoked and it is impossible to create new ones below 1024 bit or with low exponents. It is, however, still possible to create 1024 bit certificates, which is due to a limitation in the client certificate creation script for Internet Explorer. They say they're working on this and plan to prevent 1024 bit certificates in the future. They also told me that they've checked for the Debian SSL bug.
I reported the issue on March 11th and got a reply the same day - that's pretty okay. One slight complaint remains: there was no security contact with a PGP key listed on the webpage (though I got a PGP-encrypted contact once I asked for it). That's not good; especially from a security project I expect to be able to contact them about security issues with encrypted mail. One could also argue that four months is a bit long to fix such an issue, but as it was far from trivial, that's excusable.
I'd say I'm quite satisfied with CAcert's reactions. I always got fast replies to my questions and the issues were resolved properly. I have other points of criticism regarding the security of CAcert; the issue that bothers me most is that they still use SHA-1 and refuse to switch to a more secure hash algorithm like SHA-512, although all major browsers have supported these for a long time.
I want to encourage others to do further tests on CAcert. I'd like to see CAcert being an authority that does better than the commercial ones. The database from the observatory is a treasure and should be used by projects like CAcert to improve their security.
Thursday, April 21. 2011
https is likely the most widely used cryptographic protocol. It's based on X.509 certificates. There's a lively debate about how useful this concept is at all, mainly through the interesting findings of the EFF SSL Observatory. But that won't be my point today.
Pretty much all webpage certificates use RSA and, sadly, the vast majority still use insecure hash algorithms. But it is little known that the X.509 standards support a whole bunch of other public key algorithms.
I've set up a page with a couple of test cases for less-often-used algorithm combinations. At the moment, it's mainly focused on RSASSA-PSS, but I plan to add elliptic curve algorithms soon. As I won't get any certificate authority to sign certificates for me with anything other than classic RSA, I created my own testing root CA.
I'd be very interested in some feedback. If you happen to have an interesting OS/browser combination, please import the root certificate and send me a screenshot where I can see how many green ticks there are (post a link to the screenshot in the comments or send it via email).
At the moment, I'm especially looking for people to test:
- Internet Explorer 9 on Windows 7
- Safari on latest MacOS X
- Internal browser on iPhone (I don't know if it's possible to install a new certificate authority there)
Saturday, February 26. 2011
The Electronic Frontier Foundation is running a fascinating project called the SSL Observatory. What they basically do is quite simple: They collected all SSL certificates they could get via https (by scanning all possible IPs), put them in a database and made statistics with them.
For an introduction, watch their talk at the 27C3 - it's worth it. For example, they found a couple of "Extended Validation" certificates that clearly violated the rules for extended validation, including one 512-bit EV certificate.
The great thing is: they provide the full MySQL database for download. I took the time to import the thing locally and am now able to run my own queries against it.
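If you'd rather script the queries than type them into the mysql shell, something like this works - a sketch assuming the pymysql package and a local database named observatory with hypothetical credentials:

import pymysql

conn = pymysql.connect(host="localhost", user="root",
                       password="", database="observatory")
with conn.cursor() as cur:
    cur.execute(
        "SELECT `Signature Algorithm`, COUNT(*) FROM valid_certs "
        "GROUP BY `Signature Algorithm` ORDER BY COUNT(*)"
    )
    for algo, count in cur.fetchall():
        print(algo, count)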
Let's show some examples: I'm interested in which crypto algorithms are actually used in the wild. My query:
SELECT `Signature Algorithm`, count(*) FROM valid_certs
GROUP BY `Signature Algorithm` ORDER BY count(*);

This shows all signature algorithms used on the certificates.
And the result:
+--------------------------+----------+
| Signature Algorithm      | count(*) |
+--------------------------+----------+
| sha512WithRSAEncryption  |        1 |
| sha1WithRSA              |        1 |
| md2WithRSAEncryption     |        4 |
| sha256WithRSAEncryption  |       62 |
| md5WithRSAEncryption     |    29958 |
| sha1WithRSAEncryption    |  1503333 |
+--------------------------+----------+

Nothing very surprising here. It seems nobody is using anything other than RSA. The most popular hash algorithm is SHA-1, followed by MD5. The transition to SHA-256 seems to be going very slowly (by the way, the most common argument I heard when asking CAs for SHA-256 certificates was that Windows XP before Service Pack 3 doesn't support it). The four MD2 certificates seem interesting, though: even at its age, MD2 is still more secure than MD5 and provides a similar security margin to SHA-1, although support for it was removed from a couple of security libraries some time ago.
This query only covered the valid certs, meaning those signed by a browser-supported certificate authority. Now I run the same query on the all_certs table, which contains every cert, including expired, self-signed or otherwise invalid ones:
+-----------------------------------------------------+----------+
| Signature Algorithm                                 | count(*) |
+-----------------------------------------------------+----------+
| 1.2.840.113549.27.1.5                               |        1 |
| sha1                                                |        1 |
| dsaEncryption                                       |        1 |
| 1.3.6.1.4.1.5849.1.3.2                              |        1 |
| md5WithRSAEncryption ANDALSO md5WithRSAEncryption   |        1 |
| ecdsa-with-Specified                                |        1 |
| dsaWithSHA1-old                                     |        2 |
| itu-t ANDALSO itu-t                                 |        2 |
| dsaWithSHA                                          |        3 |
| 1.2.840.113549.1.1.10                               |        4 |
| ecdsa-with-SHA384                                   |        5 |
| ecdsa-with-SHA512                                   |        5 |
| ripemd160WithRSA                                    |        9 |
| md4WithRSAEncryption                                |       15 |
| sha384WithRSAEncryption                             |       24 |
| GOST R 34.11-94 with GOST R 34.10-94                |       25 |
| shaWithRSAEncryption                                |       50 |
| sha1WithRSAEncryption ANDALSO sha1WithRSAEncryption |       72 |
| rsaEncryption                                       |       86 |
| md2WithRSAEncryption                                |      120 |
| GOST R 34.11-94 with GOST R 34.10-2001              |      378 |
| sha512WithRSAEncryption                             |      513 |
| sha256WithRSAEncryption                             |     2542 |
| dsaWithSHA1                                         |     2703 |
| sha1WithRSA                                         |    60969 |
| md5WithRSAEncryption                                |  1354658 |
| sha1WithRSAEncryption                               |  4196367 |
+-----------------------------------------------------+----------+

It seems quite a few people are experimenting with DSA signatures. The number of GOST certificates is interesting; GOST is a set of cryptography standards of the former Soviet Union. The number of people trying to use elliptic curves is really low - surprising given their popularity, and anyone who cares about SSL performance might find them a good catch. As for the algorithms only showing as numbers: 1.2.840.113549.1.1.10 is RSASSA-PSS (not detected by current OpenSSL release versions), 1.3.6.1.4.1.5849.1.3.2 is another GOST variant (GOST3411withECGOST3410) and 1.2.840.113549.27.1.5 is unknown to Google, so it must be something very special.
Wednesday, January 5. 2011
If you've read my last blog entry, you saw that I was struggling a bit with the fact that I was unable to create a PGP key without SHA-1. This is a bit tricky, as there are various places where hash functions are used within a PGP key:
1. The key self-signatures and signatures on other keys. Every key has user IDs that are signed with the master key itself. This is to prove that the names and mail addresses in the key belong to the keyholder and can't be replaced by a malicious attacker.
2. The signatures on messages, for example e-mails.
3. The preferences inside the key - these indicate to other people which signature algorithms you would prefer if they send messages to you.
4. The fingerprint.
Number 1 is controlled by the setting cert-digest-algo in the file gpg.conf (for both self-signatures and signatures on other keys). Number 2 is controlled by the setting personal-digest-preferences. So you should add these two lines to your gpg.conf, preferably before you create your own key (if you intend to create one; don't bother if you want to stick with your current one):
personal-digest-preferences SHA256
cert-digest-algo SHA256

Number 3 defaults to SHA256 if you generate your key with a recent GnuPG version. You can check it with gpg --edit-key [your key ID] and then showpref. Number 4, I think, can't be changed at all (though I don't think it poses a security threat with regard to collision attacks - still, it should be changed at some point).
It is also not really trivial to check which algorithms were used. For message signatures, you can see it if you verify them with gpg -v --verify [filename]. For key signatures, I found no option to do that - but a workaround: export the key whose signatures you'd like to check with gpg --export --armor [key ID] > filename.asc. Then parse the exported file with gpg -vv filename.asc. It'll show you blocks like this:
:signature packet: algo 1, keyid A5880072BBB51E42
    version 4, created 1294258192, md5len 0, sigclass 0x13
    digest algo 8, begin of digest 3e c3

The "digest algo 8" is what you're looking for: 1 means MD5, 2 means SHA-1, 8 means SHA-256. Other values can be looked up in include/cipher.h in the source code. No, that's not user-friendly. But I found no easier way.
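If you have to do this for many signatures, a small script helps. A rough sketch, assuming gpg prints the packet dump on stderr and key.asc is the export from above:

import re
import subprocess

ALGOS = {1: "MD5", 2: "SHA1", 8: "SHA256", 10: "SHA512"}
out = subprocess.run(["gpg", "-vv", "key.asc"],
                     capture_output=True, text=True)
for m in re.finditer(r"digest algo (\d+)", out.stderr):
    n = int(m.group(1))
    print(ALGOS.get(n, f"unknown ({n})"))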
The big question remains: Why is this so complicated and why isn't gnupg just defaulting to SHA256? I don't know the answer.
(Please also have a look at this blog entry from Debian about the topic)
Sunday, December 26. 2010
Having used my PGP key 3DBD3B20 for almost eight years, it's finally time for a new one: 4F9F43A9. The old primary key was a 1024 bit DSA key, which had two drawbacks:
1. 1024 bit keys for DLP or factoring based algorithms are considered insecure.
2. It's impossible to set the used hash algorithm to anything beyond SHA-1.
My new key has a 4096 bit key size (2048 bit has been the default of GnuPG since 2.0.13 and should be quite enough, but I wanted some extra security) and the default hash algorithm preference is SHA-256. I had to make a couple of decisions about my name in the key:
1. I'm usually called Hanno, but my real/official name is Johannes.
2. My surname has a special character (ö) in it, which can be represented as oe.
In my previous keys, I've mixed these. I decided against that for the new key, because both my unofficial first name Hanno and my umlaut-converted surname Boeck are part of my mail address, so people should still be able to find my key if they search for those.
Another decision was the time I wanted my key to be valid. I've decided to give it an expiration date, but a fairly long one: 10 years from now.
I've signed my new key with my old key, so if you've signed my old one, you should be able to verify the new one. I leave it up to you whether you sign my new key based on that or want to redo the signing procedure. I'll start from scratch and won't automatically sign keys with the new key that I had signed with the old one. If you want to key-sign with me, you may find me at the 27C3 within the next days.
My old key will be valid for a while, at some time in the future I'll probably revoke it.
Update: I just found out that having a key without SHA-1 is trickier than I thought. The self-signatures were still SHA-1. I could re-do the self-signatures and revoke the old ones, but that would clutter the key with a lot of useless cruft, and as the new key wasn't around long and didn't get any signatures I couldn't easily get again, I decided to start over: the new key is BBB51E42 and the other one will be revoked.
I'll write another blog entry to document how you can create your own SHA-256 only key.
Friday, December 10. 2010
Yesterday I was at a talk at the FSFE Berlin about free software and GSM. It was an interesting talk and discussion.
Probably most of you know that GSM is the protocol that keeps the large majority of mobile phones running. In the past, only a handful of companies worked with the protocol and according to the talk, even most mobile phone companies don't know much of the internal details, as they usually buy ready-made chips.
Three free software projects work on GSM: OpenBTS and OpenBSC on the server side and OsmocomBB on the client side. What I didn't know yet and find really remarkable: the island state of Niue installed a GSM network based on OpenBTS. The island found no commercial operator, so they installed a free software based, community supported GSM network.
Afterwards, we had a longer discussion about the security and privacy implications of GSM. To sum it up, GSM is horribly broken on the security side. It offers no authentication between phones and cells. Also, its encryption was broken back in the early 90s. There has not been much progress on protocol improvements, although this has been known for a very long time. It's also well known that so-called IMSI-catchers are sold illegally for a few thousand dollars. The only reason GSM still works at all is basically that these attacks still cost a few thousand dollars. But cheaper hardware and improvements in free GSM software make it likely that these possibilities will have a greater impact in the future (this is only a brief summary and I'm not really deep into the topic; see Wikipedia for some starting points for more info).
There was a bit of discussion about how realistic it is that a "normal user" is threatened by this, given the price of a few thousand dollars for the equipment. I didn't bring this up in the discussion anymore, but I remember having seen a talk by a guy from Intel saying that the tendency is to design generic chips that can handle various protocols - GSM, Bluetooth or WLAN - purely by software control. Thinking about that, this raises the question of protocol security even more, as it might already be possible to do mobile phone wiretapping with mainstream computer hardware by just replacing the firmware of a wireless LAN card. It almost certainly will be possible within some years.
Another topic that was raised was frequency regulation. Even with free software you wouldn't be able to operate your own GSM network, because you couldn't afford to buy a frequency (although it seems to be possible to get a testing license for a limited space, e. g. for technical workshops - the 27C3 will have a GSM test network). I mentioned that there's a chapter about this in the book "Code" by Lawrence Lessig (available in an updated version here; the chapter is "The Regulators of Speech: Distribution" and starts on page 270 in the PDF). Lessig's thought is that frequency regulation was necessary in the early days of radio technology, but today it would be easily possible to design protocols that don't need regulation - they could be self-regulating, e. g. with a prefix in front of every data package (the way wireless LAN works). The problem with that is that today, frequency usage generates large income for the state - which runs completely against the original idea of regulation, whose primary purpose was to keep the technology usable.
Thursday, September 9. 2010
In 2008, a rather interesting new kind of security problem within web applications was found, called clickjacking. The idea is rather simple but ingenious: a page from the attacked web application is loaded into an iframe (a way to display a webpage within another webpage), but so small that the user cannot see it. Via JavaScript, this iframe is always placed below the mouse cursor and a button in the iframe is focused. When the user clicks anywhere on the attacker's page, the click hits the button in the web app, triggering some action the user didn't want.
What makes this vulnerability especially interesting is that it is a vulnerability within protocols, and it was pretty clear that there would be no easy fix without changes to existing technology. A possible attempt to circumvent it would be JavaScript frame killer code within every web application, but that's far from a nice solution (as it makes it necessary to have JavaScript code around even if your web app does not use any JavaScript at all).
Now Microsoft has suggested a new HTTP header X-FRAME-OPTIONS that can be set to DENY or SAMEORIGIN. DENY means that the webpage sending that header may not be displayed in a frame or iframe at all. SAMEORIGIN means that it may only be framed from webpages on the same domain name (sidenote: I tend not to like Microsoft and their behaviour on standards and security very much, but in this case there's no reason for that - although it's not a standard, yet?, this proposal is completely sane and makes sense).
Just recently, Firefox added support; all other major browsers already did so before (Opera, Chrome), so we finally have a solution to protect against clickjacking (Konqueror does not support it yet and I found no plans for it, which may be a sign of the sad state of Konqueror development regarding security features - they're also the only browser not supporting SNI). It's now up to web application developers to use that header. For most of them - if they're not using frames at all - it's probably quite easy, as they can just set the header to DENY all the time. If an app uses frames, it requires a bit more thought about where to set DENY and where to use SAMEORIGIN.
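Setting the header is a one-liner in most frameworks. A minimal self-contained sketch with Python's standard library (port and page content are arbitrary):

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # DENY: this page may never be framed; use SAMEORIGIN instead
        # if the app frames its own pages.
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>not frameable</h1>")

HTTPServer(("", 8000), Handler).serve_forever()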
It would also be nice to have some "official" IETF or W3C standard for it, but as all major browsers agree on that, it's okay to start using it now.
But the main reason I wrote this long introduction: I've set up a little test page where you can check if your browser supports the new header. If it doesn't, you should look for an update.
Tuesday, August 10. 2010
Yesterday I read via Twitter that the HP researcher Vinay Deolalikar claimed to have proven P!=NP. If you've never heard of it: the question whether P=NP or not is probably the biggest unsolved problem in computer science and one of the biggest in mathematics. It's one of the seven Millennium Problems that the Clay Mathematics Institute announced in 2000. Only one of them has been solved yet (the Poincaré conjecture), and everyone who solves one gets one million dollars for it.
The P/NP problem is one of the candidates where many have thought it may never be solved at all, and if this result is true, it's a real sensation. Obviously, someone claiming to have solved it does not mean that it is solved. Dozens of pages of complex math need to be peer reviewed by other researchers. Even if it's correct, it will take some time until it's widely accepted. I'm far from understanding the math used there, so I cannot comment on it, but Vinay Deolalikar seems to be a serious researcher who has published in the area before, so it's at least promising. As I'm currently working on "provable" cryptography, which has quite some relation to this, I'll try to explain it a bit in simple words and give some outlook on what this may mean for the security of your bank accounts and encrypted emails in the future.
P and NP are problem classes that describe how hard it is to solve a problem. Generally speaking, P problems are ones that can be solved rather fast - more precisely, their running time can be expressed as a polynomial. NP problems, on the other hand, are problems where a simple method exists to verify a solution but it's still hard to find one. To give a real-world example: suppose you have a number of objects and want to put them into a box, but you don't know if they fit. There's a vast number of possible ways to arrange the objects, so it may be really hard to find out whether it's possible at all. But if you have a solution (all objects are in the box), you can close the lid and easily see that the solution works (I'm not entirely sure, but I think this is a variant of KNAPSACK). There's another important class, the NP-complete problems. Those are like the "kings" of NP problems: if you had an efficient algorithm for one NP-complete problem, you could use it to solve all other NP problems.
NP problems are the basis of cryptography. The most popular public key algorithm, RSA, is based on the factoring problem. Factoring means decomposing a composite number into primes; for example, factoring 6 yields 2*3. Factoring a large number is hard, but if you have two factors, it's easy to check that they are indeed factors of the large number by multiplying them. One big problem with RSA (and pretty much all other cryptographic methods) is that a trick might exist that nobody has found yet which makes it easy to factor a large number. Such a trick would undermine the basis of most cryptography used on the internet today, for example https/ssl.
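A toy Python sketch of that asymmetry, with deliberately tiny primes:

p, q = 1000003, 1000033        # two small primes; RSA moduli are vastly larger
n = p * q                      # multiplying them: instant

def factor(n):
    # naive trial division - fine for tiny n, hopeless for 1024-bit numbers
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None

print(factor(n))               # recovers (1000003, 1000033) after ~10^6 steps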
What one would want is cryptography that is provably secure. This would mean one can prove that it's really hard to break (where "really hard" could be something like "not possible with normal computers using the amount of mass in the earth within the lifetime of a human"). With today's math, such proofs are nearly impossible. In math terms, this would be a lower bound for the complexity of a problem.
And that's where the P!=NP proof gets interesting. If it's true that P!=NP, then NP problems are definitely more complex than P problems. So this might be the first breakthrough in establishing lower complexity bounds. I said above that I'm currently working on "provable" security (with the example of RSA-PSS), but provable in this context means that you have core algorithms you believe to be secure and design your provable cryptographic system around them. Knowing that P!=NP could be the first step towards having really "provably secure" algorithms at the heart of cryptography.
I want to stress that it's only a "first step". To this day, nobody has been able to design a useful public key cryptosystem around an NP-hard problem. Factoring is in NP, but (at least as far as we know) it's not NP-hard. I haven't covered the topic of quantum computers at all, which opens up a whole lot of other questions (for the curious: it's unknown whether NP-hard problems can be solved with quantum computers).
As a final conclusion: if the above result is true, it will lead to a whole new era of cryptographic research - and some of it will very likely end up in your web browser within a few years.