A blog about computer science and a life between Heidelberg and Munich

This semester has been a lot of work. At our university we took part in a practical course offered by the chair of IT Security which was about the setup and operation of honeypots. In fact it was more than just a single honeypot to work with - we had a whole subnet to play around in. Summed up, it was a honeynet with different teams owning several machines, offering various services and analyzing data on different layers. Because it was quite a lot of work I decided to share most of my stuff here. Maybe there is someone out there working on the same topic and in search of some experiences.

How to take a look at the bad guys?

The biggest problem of honeypots is the logging and tracing of crackers and script kiddies. It is very hard to fake services so well that nobody can tell whether he is using a real service or a faked one. There are always some minor details that give a faked service away - details similar to the ones OS fingerprinting is based on. Small technical issues can reveal the whole honeypot or, even worse, the whole network which is used to detect the newest attacks and scams. So at university we decided to sidestep all those concerns and simply implement real services that did exactly the same as their colleagues in production environments. If there is no difference from real services, there is no way to find out that those services are monitored very closely and are a trap to catch bad guys.

The next question we asked ourselves was about the mentioned monitoring of those services and the underlying operating system. We of course knew how all the services were configured because it was part of our job to do exactly this. But we also had to make sure that we saw every request made to a service and the corresponding result - and, even more important, every change made to configuration files on the host and/or to the services directly.

The first part was quite easy to solve as we decided to avoid any encryption in terms of network traffic. Luckily we had the possibility to use a mirror port on our local switch and therefore had pcap files of every single packet which ever left or entered our network. So even if there had been a malicious network stream we would at least be able to reconstruct it and retrace the effects those packets had on our services. As I said, we had a full packet record for every single machine within the honeynet. This strategy worked quite well for tons of services like FTP, HTTP, IRC and so on. But what to do with the SSH daemon?
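For offline analysis the classic pcap file format is simple enough to walk through with nothing but the standard library. The following is a minimal sketch of reading such a capture (field layout as in the libpcap format; the toy two-packet capture built at the end is purely illustrative, not real traffic):

```python
import struct
import io

# classic pcap global header: magic, version 2.4, zone, sigfigs, snaplen, linktype
PCAP_GLOBAL_HDR = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

def iter_packets(fh):
    """Yield (timestamp, raw_bytes) for each record in a little-endian pcap file."""
    fh.read(24)  # skip the 24-byte global header
    while True:
        hdr = fh.read(16)
        if len(hdr) < 16:
            return  # end of capture
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack("<IIII", hdr)
        yield ts_sec + ts_usec / 1e6, fh.read(incl_len)

# build a toy capture with two tiny "packets" to demonstrate the record layout
buf = io.BytesIO(PCAP_GLOBAL_HDR
                 + struct.pack("<IIII", 1, 0, 3, 3) + b"abc"
                 + struct.pack("<IIII", 2, 0, 2, 2) + b"de")
print(len(list(iter_packets(buf))))  # → 2
```

From here on, reassembling a TCP stream is "only" a matter of parsing the link, IP and TCP headers of each record and ordering the payloads by sequence number.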

Secure shell daemon from hell - never trust an sshd

We first did some brainstorming about how to see what was going on once an attacker had successfully entered one of our machines through the secure shell daemon. We thought about things like using a null cipher and looking at the pcaps, or simply opening up telnet. But all of this would have been much too obvious to the bad guys. After a few hours we started writing a patch to at least see all the passwords submitted to the secure shell daemon. In the end this really was a very interesting and important source of data, but to be honest, at the time of writing the patch we were not aware of that. It was a nice way to create statistical data for questions like “Which is the most tried password for root?” or “How many unique passwords were tried to brute-force our machines?”

(opensshd.patch) download
--- auth-passwd.c       2012-07-01 22:15:25.000000000 +0200
+++   2012-07-01 22:15:53.000000000 +0200
@@ -112,8 +112,11 @@
 #ifdef USE_PAM
-       if (options.use_pam)
+       if (options.use_pam) {
+               logit("user,password,ip,port=%s,%s,%s,%d",
+                       authctxt->user, password, get_remote_ipaddr(), get_remote_port());
                return (sshpam_auth_passwd(authctxt, password) && ok);
+       }
 #if defined(USE_SHADOW) && defined(HAS_SHADOW_EXPIRE)
        if (!expire_checked) {

It was surprisingly easy to add a small bit of code to get sshd to log all passwords it sees directly to syslog. If you use our patch, please be aware that it will log all passwords, whether they are valid for login or not. You can easily modify the patch to log passwords only if the result of pam.d is negative. It is also possible to log a key to syslog or to a dedicated log file, or, if you add some more code, to move the data over the network to another host and save it there. There are plenty of possibilities - just be creative!
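Once the patched sshd writes lines of the form `user,password,ip,port=…` to syslog, answering the statistics questions above is a few lines of scripting. A quick sketch (the log format matches our patch; the sample lines and the helper name are made up for illustration):

```python
from collections import Counter

def password_stats(lines):
    """Parse the 'user,password,ip,port=...' lines emitted by the patched sshd."""
    tries = []
    for line in lines:
        _, _, values = line.partition("user,password,ip,port=")
        if values:
            user, password, ip, port = values.strip().split(",")
            tries.append((user, password))
    # most tried passwords for root, and the number of unique passwords overall
    root_pw = Counter(pw for user, pw in tries if user == "root")
    unique = len({pw for _, pw in tries})
    return root_pw.most_common(3), unique

sample = [
    "sshd[815]: user,password,ip,port=root,123456,10.0.0.1,4711",
    "sshd[815]: user,password,ip,port=root,123456,10.0.0.2,4712",
    "sshd[815]: user,password,ip,port=admin,letmein,10.0.0.3,4713",
]
top, unique = password_stats(sample)
print(top, unique)  # → [('123456', 2)] 2
```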

The syslog daemon and its marriage with different services

We used syslog-ng as our syslog daemon and tried to build a configuration which did exactly the same as the common syslog daemon in our Linux distribution. This means that all the messages had to be appended to specific files within the /var/log/ directory. Additional messages like the passwords were not logged on the honeypot itself. One of the great things about syslog-ng is that it is so easy to mirror messages and send them to remote destinations where another syslog instance is listening. This remote syslog instance wrote all the messages to a MySQL database, which was really handy to analyze afterwards. Due to this, even those messages that weren’t stored locally were written to the database.
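The mirroring itself is a standard syslog-ng pattern. A sketch of what such a configuration could look like (the remote address, port and file paths are placeholders, and the exact driver syntax depends on your syslog-ng version):

```
source s_local { unix-stream("/dev/log"); internal(); };

# local files, looking exactly like the distribution default
destination d_messages { file("/var/log/messages"); };

# mirror everything to the remote log host
destination d_remote { udp("192.0.2.10" port(514)); };

log { source(s_local); destination(d_messages); destination(d_remote); };
```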

If you have more time I suggest that you make a few changes to the syslog-ng daemon and try to hard-code its configuration within the source. One could also think of adding a covert channel to send log messages over the network - our simple solution can easily be discovered with netstat. As we had limited resources during the semester and only a few weeks to prepare our honeypot, we did not modify the source of syslog-ng, but it is still worth thinking about those issues when running a honeypot. Another weak point of this setup was the lack of signatures: fake messages were possible without any possibility for us to recognize them. But as I said, this project was no scientific long-term stuff but a place to gain first experiences with honeypots and the topics related to them.

We were surprised by the problems some services had logging to syslog. For example, MySQL was able to do it, but there weren’t many options to configure the amount or quality of the logging. How we tamed our Apache2 is a discussion of its own. On the other hand, services like proftpd were easy to integrate and even had the ability to log to a MySQL database in parallel. By using accounts with restricted rights, like INSERT only, the possibilities for attackers were limited. We also had very, very restricted access to our log server, which was only accessible from the honeypots within our network and of course only on two ports. And even those ports were monitored closely, so that an invalid packet - for example one without a SYN flag - could only pass if there was an established connection from the sender.
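An INSERT-only account of the kind mentioned above can be set up with a single grant. A sketch in MySQL syntax (database, table, host pattern and of course the password are placeholders):

```sql
-- the logging account may write new rows, but can neither read nor change them
GRANT INSERT ON syslog.logs TO 'logwriter'@'10.0.%' IDENTIFIED BY 'secret';
```

Even if an attacker recovers these credentials from a service configuration, the worst he can do is insert fake log lines - which brings us back to the missing-signatures problem above.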

Of course the only way to really make sure that those services were logging correctly would have been to hard-code the configurations within the source and to check regularly, using hash values, whether they had been modified. We did a lighter version of this: we calculated the hashes of the configuration files and the binaries and compared them with the hashes we had taken before the honeypot was connected to the internet. And we must admit that this worked out quite well and we were able to detect modifications very efficiently.
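The baseline comparison we did can be scripted in a few lines. A minimal sketch (the watched paths in the comment are just examples; ours covered the configuration files and service binaries):

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large binaries don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check(baseline):
    """Return all files whose current hash no longer matches the recorded one."""
    return [path for path, digest in baseline.items() if sha256_of(path) != digest]

# build the baseline before connecting to the internet, e.g.:
#   baseline = {p: sha256_of(p) for p in ("/etc/ssh/sshd_config", "/usr/sbin/sshd")}
# ... and later, check(baseline) lists every modified file
```

Keep the baseline (and ideally the checking script itself) off the honeypot, otherwise an attacker can simply update it after replacing a binary.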

Bash up the bash - a quick & dirty solution

Okay, this topic will be very basic, but even the more sophisticated guys with rootkits on our machines did not notice it. So it is worth talking about, even if it is too simple to really call it an observation technique. Since Bash version 4 the shell supports logging to syslog natively, and this may also be the best solution - if it is hard-coded, an attacker has to bring his own Bash or replace yours. But if you do not have the time, or simply want to find out how clever your attacker is, you can try adding something like this:

“a bash trap”
# log every command the current user runs via the DEBUG trap
function bashthebash {
   declare COMMAND
   COMMAND=$(fc -ln -0)
   logger -p local0.notice -t bash -i -- "${USER}:${COMMAND}"
}
trap bashthebash DEBUG

Add this code fragment to your local profile or, for global usage, to /etc/profile and watch the log getting filled. As I said, it is all too easy and simple to spot - yet none of the guys we saw had a look at the bash configuration outside of their home directory. A smarter attacker would have found it quite easily, but maybe we only saw the dumber ones…

There is a lot more stuff to tell, so I split this up into several articles and will talk about our underlying level of observation techniques in the next blog entry. Don’t worry - it will also cover a more detailed, harder to detect and more secure way of observing the shell.

posted in: Linux, bash-trap, honeypot, security, university