Hi! I'm currently doing my diploma work. It's about detecting HTTP DoS attacks on web servers. The system tries to detect these attacks based on statistical anomalies in the requests. It looks at the request rate (with an exponential moving average), the period in which a client is active, the time distribution of the requests (chi-square test), the return status of the web server, and the distribution of the URIs hit.

So far it looks pretty promising, but one very important factor is unaccounted for: the cost of a single request! What I would like to do is calculate a cost for each request, so that requests which load the CPU more or demand lots of bandwidth get a higher suspicion level.

The whole analysis is done on the log file of a web server. To train the system, old log files are fed in. After that, the system switches to live mode, watches the access log file for changes (like tail -f), and tries to detect attacks in "real time" (with the possibility of blocking them via an Apache module or the firewall (time-based rule)).

Some companies in Switzerland sponsored their log files to develop the system. The bad thing: none of these log files contains the time it took to handle the requests! So here is my question: is somebody willing to give me some log data in combined format, but with the time to handle each request? Apache 2 would log this with the "%D" parameter, in microseconds. Log files with about 100'000 to x'000'000 requests would be optimal. You can anonymize your log data with http://sourceforge.net/projects/anonlog/ if you like or have to.

Another thing that would be very interesting: does somebody have log files where script kiddies tried to bring down a web server with lots of requests to CPU-intensive dynamic pages, or where the server got attacked in some other way?

If you think I'm running in the wrong direction with this project, or have other things in mind which I should consider, please let me know!

Regards,
Markus Roth
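PS: For anyone willing to share logs, the combined format plus the handling time would look roughly like the LogFormat line below. The nickname "combinedtime" and the log path are just my suggestion; the format itself is the standard combined format with %D (microseconds) appended.

```apache
# httpd.conf / apache2.conf: combined log format, plus the time taken
# to serve the request (%D, in microseconds) appended as the last field.
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combinedtime
CustomLog /var/log/apache2/access.log combinedtime
```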
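PS: In case it helps to make the approach concrete, here is a minimal sketch of the per-client request-rate tracking with an exponential moving average. The class name, the smoothing factor alpha, and the initial value are my own illustrative choices, not fixed parts of the system.

```python
# Sketch: per-client request rate tracked with an exponential moving
# average (EMA). alpha is an illustrative smoothing factor, not tuned.

class RateTracker:
    def __init__(self, alpha=0.1):
        self.alpha = alpha      # EMA smoothing factor (assumption)
        self.ema = {}           # client IP -> smoothed requests/second
        self.last_seen = {}     # client IP -> timestamp of last request

    def update(self, ip, timestamp):
        """Feed one log line's (ip, unix timestamp); return the new EMA rate."""
        prev = self.last_seen.get(ip)
        self.last_seen[ip] = timestamp
        if prev is None:
            self.ema[ip] = 0.0  # first sighting: no rate yet
            return 0.0
        gap = max(timestamp - prev, 1e-6)   # seconds since last request
        instant_rate = 1.0 / gap            # instantaneous requests/second
        self.ema[ip] = (1 - self.alpha) * self.ema[ip] + self.alpha * instant_rate
        return self.ema[ip]

tracker = RateTracker()
for t in range(10):             # one request per second from one client
    rate = tracker.update("10.0.0.1", float(t))
```

With one request per second the EMA converges toward 1.0 request/second; a sudden flood of requests with tiny gaps would push it up quickly, which is the anomaly signal.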
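PS: The per-request cost idea could look something like this. This is only a sketch of what I have in mind: the weights and the normalisation constants are made up for illustration, and finding sensible reference values is exactly why I need logs with %D in them.

```python
# Sketch: suspicion weight of a single request, combining the handling
# time (%D, microseconds) and the response size (bytes sent).
# All constants below are illustrative assumptions, not measured values.

def request_cost(handle_time_us, bytes_sent,
                 time_weight=1.0, bw_weight=1.0,
                 ref_time_us=50_000.0, ref_bytes=100_000.0):
    """Return a cost score where 1.0 is roughly an 'average' request;
    CPU-heavy or bandwidth-heavy requests score higher."""
    return (time_weight * handle_time_us / ref_time_us
            + bw_weight * bytes_sent / ref_bytes) / (time_weight + bw_weight)

# A cheap static page vs. an expensive dynamic page:
cheap = request_cost(2_000, 5_000)          # fast, small response
costly = request_cost(800_000, 2_000_000)   # slow, large response
```

A client whose requests consistently score high on this cost would then get a higher suspicion level than one hammering cheap static files at the same rate.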