At the Chair for Systems Security, we work on the following topics:

  • Program / Binary Analysis
  • Intelligent Security Systems
  • Web Privacy
  • Network Security
  • Mobile Network Security

Our recent publications provide an overview of our current work. Source code and data sets for most of our research projects are publicly available. If you have questions, please reach out to Prof. Thorsten Holz or the other members of the research group.


Program / Binary Analysis

Program analysis describes the process of automatically extracting or inferring program properties that allow an analyst to make statements about the program's behavior, design, or security/safety properties. Among other uses, these techniques are commonly employed in tools such as compilers to facilitate efficient code generation. Such techniques also apply in the reverse direction: if no source code is available, the binary representation of a program can be reverse engineered (manually or automatically) to obtain a higher-level representation again. We apply such techniques either to find and exploit vulnerabilities (e.g., via fuzz testing or code-reuse attacks) or to develop defenses (e.g., control-flow integrity or randomization). The techniques we develop can typically be applied at the binary level, so no access to source code is needed. In our research, we cover the following topics:
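As a toy illustration of the coverage-guided fuzzing idea, the sketch below mutates inputs at random and keeps every input that reaches a new branch of a hypothetical target. The target function and all names are made up for illustration; real fuzzers obtain coverage feedback by instrumenting an actual binary, whereas this toy target simply reports which branches it hit.

```python
import random

random.seed(1234)  # reproducible toy run

def target(data: bytes) -> set[str]:
    """Hypothetical program under test; returns the set of branches it hit."""
    hit = set()
    if len(data) > 2:
        hit.add("len>2")
        if data[0] == ord("F"):
            hit.add("F")
            if data[1] == ord("U"):
                hit.add("FU")
                if data[2] == ord("Z"):
                    hit.add("FUZ")  # the "deep" state the fuzzer must uncover
    return hit

def mutate(data: bytes) -> bytes:
    """Set one random byte to a random value."""
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seeds, iterations=100_000):
    corpus = list(seeds)
    coverage: set[str] = set()
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        new = target(candidate) - coverage
        if new:                       # keep inputs that reach new code
            coverage |= new
            corpus.append(candidate)
    return coverage, corpus

cov, corpus = fuzz([b"AAAA"])
```

Without the coverage feedback (i.e., if the corpus never grew), hitting the innermost branch by blind random mutation would be astronomically unlikely; keeping intermediate inputs turns the search into a sequence of small steps.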

– Reverse Engineering
– Binary Analysis
– Compilers
– Code Obfuscation
– Abstract Interpretation
– Fuzzing
– Program Synthesis
– Model Checking (MC)
– Symbolic Execution (SE)
– Satisfiability Modulo Theories (SMT)
– Firmware Re-Hosting / Emulation
– Control-Flow Integrity or Randomization

Selected Publications

– "IJON: Exploring Deep State Spaces via Fuzzing" (IEEE S&P'20)
– "Grimoire: Synthesizing Structure while Fuzzing" (USENIX Security'19)
– "Nautilus: Fishing for Deep Bugs with Grammars" (NDSS'19)
– "Redqueen: Fuzzing with Input-to-State Correspondence" (NDSS'19)
– "kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels" (USENIX Security'17)
– "EthBMC: A Bounded Model Checker for Smart Contracts" (USENIX Security'20)
– "Syntia: Synthesizing the Semantics of Obfuscated Code" (USENIX Security'17)
– "Reverse Engineering x86 Processor Microcode" (USENIX Security'17)
– "How They Did It: An Analysis of Emission Defeat Devices in Modern Automobiles" (IEEE S&P'17)
– "Counterfeit Object-oriented Programming: On the Difficulty of Preventing Code Reuse Attacks in C++ Applications" (IEEE S&P'15)
– "Marx: Uncovering Class Hierarchies in C++ Programs" (NDSS'17)
If you are interested in working on these topics, feel free to contact:

– Moritz Schlögel
– Tobias Scharnowski
– Nils Bars
– Lukas Bernhard
– Nico Schiller


Intelligent Security Systems

Systems based on machine learning (ML) are increasingly used in security- and safety-critical domains such as autonomous driving and threat detection. The underlying algorithms, however, were not developed with security in mind and are vulnerable to targeted attacks. In this research area, we investigate the offensive and defensive aspects of these attacks and strive to improve the robustness of machine learning in adversarial settings. Moreover, machine learning has produced impressive results in areas such as natural language processing, image processing, and game playing (e.g., Chess, Go, and Dota). Surprisingly, this has not (yet) been replicated for security. Machine learning provides new tools that allow us to rethink existing approaches and tackle previously unattainable tasks. These advancements require an integration of security and machine learning. We envision this interplay as a two-pronged approach. On the one hand, machine learning techniques need to be adapted to cooperate with existing tools, with the goal of making predictions based on the data they produce. On the other hand, existing tools need to be augmented with machine learning techniques that interact with human experts, in order to accelerate manual processes and provide automatic decisions. Our research covers the following topics:
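To make the notion of an evasion attack concrete, here is a minimal sketch of a gradient-based adversarial example against a toy linear classifier. The model, its random weights, and the step-size choice are illustrative assumptions, not one of our published attacks:

```python
import random

random.seed(7)

# Toy linear "model": predict class 1 iff dot(w, x) + b > 0.
DIM = 20
w = [random.gauss(0.0, 1.0) for _ in range(DIM)]
b = 0.1

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def predict(x):
    return int(dot(w, x) + b > 0)

def sign(v):
    return (v > 0) - (v < 0)

# A benign input that the model classifies as class 1.
x = [random.gauss(0.0, 1.0) for _ in range(DIM)]
if predict(x) == 0:
    x = [-xi for xi in x]  # with b > 0, negating x always yields class 1

# Evasion attack (FGSM-style): for a linear model, the gradient of the
# score with respect to the input is w itself, so we step against
# sign(w). The step size eps is chosen just large enough to push the
# score below the decision boundary.
score = dot(w, x) + b
eps = score / sum(abs(wi) for wi in w) + 1e-3
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
```

For a linear model this succeeds exactly; against deep networks, the same idea is applied to the gradient of the loss with respect to the input, which is the fast gradient sign method.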

– Data Poisoning Attacks
– Evasion Attacks with Adversarial Examples
– Model Stealing Attacks
– Explainability and Transparency of ML Algorithms
– Generative Adversarial Networks (GANs)
– Machine Learning for Security

Selected Publications

– "Leveraging Frequency Analysis for Deep Fake Image Recognition" (ICML'20)
– "Imperio: Robust Over-the-Air Adversarial Examples Against Automatic Speech Recognition Systems" (arXiv:1908.01551)
– "Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding" (NDSS'19)
If you are interested in working on these topics, feel free to contact:

– Thorsten Eisenhofer
– Joel Frank


Web Privacy

Websites, apps, IoT devices, and businesses in general today heavily rely on personal data to tailor their services to the user's preferences, integrate social media sharing, or make money through targeted advertising. Due to the complexity of the data processing ecosystem, which often involves various parties and multiple jurisdictions, it is often hard for users to understand and control what personal data is collected by whom and why. This has led regulators across the world to create new privacy laws restricting certain practices, making others more transparent, and providing "data subjects" with new rights regarding their personal data. We study various aspects of data collection practices, the mechanisms used to meet legal requirements, and how users perceive both these tracking systems and compliance mechanisms. We cover the following topics:
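As a small illustration of one building block of such measurements, the sketch below splits the sub-requests of a page into first- and third-party hosts. The URLs are hypothetical, and the "registrable domain" heuristic is deliberately naive; a real crawler would consult the Public Suffix List instead:

```python
from urllib.parse import urlparse

def site_domain(host: str) -> str:
    """Naive 'registrable domain': the last two labels of the hostname.
    (A real measurement pipeline should use the Public Suffix List.)"""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def classify_requests(page_url: str, request_urls: list[str]) -> dict:
    """Split the sub-requests of a page into first- and third-party hosts."""
    first = site_domain(urlparse(page_url).hostname)
    parties = {"first": set(), "third": set()}
    for url in request_urls:
        host = urlparse(url).hostname
        key = "first" if site_domain(host) == first else "third"
        parties[key].add(host)
    return parties

# Hypothetical page load with three sub-requests.
result = classify_requests(
    "https://www.example.com/article",
    [
        "https://cdn.example.com/app.js",
        "https://tracker.adnetwork.example/pixel.gif",  # hypothetical tracker
        "https://fonts.gstatic.com/font.woff2",
    ],
)
```

Third-party hosts observed this way are then typically matched against tracker blocklists to distinguish benign content delivery from tracking.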

– Web tracking
– Profiling
– Compliance with legal requirements (e.g., GDPR)
– Privacy policies
– Consent mechanisms (e.g., "cookie banners")
– Privacy by design and privacy by default
– … and many other aspects of usable privacy, data protection, and surveillance.

Selected Publications

– "(Un)informed Consent: Studying GDPR Consent Notices in the Field" (CCS'19)
– "Measuring the Impact of the GDPR on Data Sharing in Ad Networks" (ASIACCS'20)
– "We Value Your Privacy … Now Take Some Cookies: Measuring the GDPR's Impact on Web Privacy" (NDSS'19)
– "'Your Hashed IP Address: Ubuntu' – Perspectives on Transparency Tools for Online Advertising" (ACSAC'19)
If you are interested in working on these topics, feel free to contact:

– Christine Utz
– Samuel Domiks


Network Security

In this research area, we measure network security aspects of large-scale datasets, for example, detecting phishing-relevant domains among newly registered domains, distributed denial-of-service attacks on the Internet, and similar events. For that purpose, we often collect data to analyze previously overlooked issues, e.g., measuring the network time synchronization ecosystem or analyzing wrongly configured devices connected to the Internet. We are particularly interested in social network analysis (e.g., Facebook), the analysis of infrastructure protocols of the Internet (e.g., the Domain Name System), and the analysis of attack vectors like phishing and scamming. Among other topics, we work in the following areas:
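To sketch how phishing-relevant domains can be flagged among newly registered ones, the example below compares each domain's first label against a small list of brand names using edit distance; the brand list, threshold, and domain names are illustrative assumptions, not our production heuristics:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

BRANDS = ["paypal", "amazon", "google"]  # illustrative target list

def looks_like_phishing(domain: str, max_dist: int = 1) -> bool:
    """Flag domains whose first label is a near-miss of a brand name
    but not the brand itself (e.g. 'paypa1' vs 'paypal')."""
    label = domain.lower().split(".")[0]
    return any(
        0 < edit_distance(label, brand) <= max_dist for brand in BRANDS
    )
```

A real classifier would combine many more signals (registration metadata, certificate transparency logs, homoglyphs, keyword lists), but typosquatting distance of this kind is a common starting point.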

– Social Network Security and Privacy Aspects
– Infrastructure Protocols (e.g., DNS, NTP, IP)
– Threat Landscapes (APTs, OSINT, Blacklists, etc.)
– Domain Names
– Honeypots

Selected Publications

– "On Using Application-Layer Middlebox Protocols for Peeking Behind NAT Gateways" (NDSS'20)
– "Beyond the Front Page: Measuring Third Party Dynamics in the Field" (Web Conference 2020)
– "Masters of Time: An Overview of the NTP Ecosystem" (EuroS&P'18)
– "No Honor Among Thieves: A Large-Scale Analysis of Malicious Web Shells" (Web Conference 2016)
If you are interested in working on these topics, feel free to contact:

– Teemu Rytilahti
– Dennis Tatang


Mobile Network Security

Whether you are watching videos, browsing Instagram, or chatting with your friends, mobile networks connect you to the Internet nearly everywhere. They are quite different from your home WiFi: the large infrastructure with thousands of base stations, SIM cards, international roaming, and billing all bring their own unique challenges. You might think that such a critical infrastructure is well tested, but in fact, many of today's software testing tools do not yet work with telecom networks. If you are interested in changing this, work with us and

– bring pentesting to telco networks,
– find bugs in nation-wide infrastructure, and
– exploit over-the-air vulnerabilities in smartphones.

Selected Publications

– "Breaking LTE on Layer Two" (IEEE S&P'19)
– "Call Me Maybe: Eavesdropping Encrypted LTE Calls With ReVoLTE" (USENIX Security'20)
– "On Security Research Towards Future Mobile Network Generations" (IEEE Commun. Surv. Tutorials)
– "IMP4GT: IMPersonation Attacks in 4G NeTworks" (NDSS'20)
– "LTE Security Disabled: Misconfiguration in Commercial Networks" (WiSec'19)
– "Lost Traffic Encryption: Fingerprinting LTE/4G Traffic on Layer Two" (WiSec'19)
If you are interested in working on these topics, feel free to contact:

– Merlin Chlosta
– David Rupprecht