
Waited for this to fall off the front page

For my personal hobby sites I change things so rarely that I can keep my TTLs very high, especially for NS records and most TXT records. Some records I could theoretically set to 68 years (the 2^31-1 second protocol maximum), though obviously nobody would keep a record that long. Going against the grain, I also keep the negative-cache TTL really high to help spot the bots that ignore it. It's a hobby of mine to study bots and what they are enumerating. Sometimes it gives me a jump start on zero-day vulnerabilities: for example, a number of bots will suddenly start looking for a specific A record such as "cpanel", to use an old and silly example.
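
To make the negative-cache point concrete, here is a minimal dnspython sketch (example.com is a placeholder, not one of my zones) that prints the SOA minimum, which is what compliant resolvers use as the negative-cache TTL per RFC 2308, alongside the TTL a local resolver actually hands back:

    # Minimal sketch using dnspython (pip install dnspython); "example.com"
    # is a stand-in zone, not one of my actual domains.
    import dns.resolver

    zone = "example.com"

    # The SOA "minimum" field is what resolvers use as the negative-cache TTL
    # (RFC 2308), so this is the knob I keep unusually high.
    soa = dns.resolver.resolve(zone, "SOA")
    print("negative-cache TTL (SOA minimum):", soa[0].minimum)

    # TTL actually handed back by whatever resolver this host is configured
    # to use; many ISP resolvers clamp this regardless of what I publish.
    ns = dns.resolver.resolve(zone, "NS")
    print("NS TTL as seen through my resolver:", ns.rrset.ttl)

    # For reference, the protocol ceiling is a 31-bit value:
    print("max TTL in seconds:", 2**31 - 1)   # roughly 68 years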

Non-bot clients will respect TTLs within reason. Most ISP recursive DNS servers will cap the TTL at 24 hours for NS records, and sometimes higher for A, CNAME, PTR, etc. Most corporate DNS servers stay close to the defaults of whatever recursive daemon they are using, usually Active Directory, sometimes BIND. The remaining limiting factor is memory, but most recursive DNS servers these days have obscene amounts of RAM and CPU time that the DNS admin may allocate. There are ways to optimize recursive servers further, such as periodically flushing junk zones via cron and pruning other things that do not need to be there, as well as tuning slabs and threads based on core count. This varies by organization and requires getting detailed zone and client statistics. An example of a junk zone would be one used by a corporate spy to exfiltrate data over DNS: hundreds of thousands of unique A record lookups carrying customer or intellectual-property data outbound, or TXT records inbound carrying encrypted malware and instructions.
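
As a rough illustration of how a candidate junk zone shows up in those statistics (the log format, the regex, and the threshold here are all assumptions, not a drop-in tool), something like this over a resolver's query log will surface zones with an absurd number of unique names, which is the usual signature of DNS tunneling:

    # Rough sketch: flag zones with suspiciously many unique query names,
    # the usual signature of DNS tunneling. The log format is an assumption
    # ("... query: <qname> IN <qtype> ..."); adjust the regex to whatever
    # your resolver actually logs.
    import re
    import sys
    from collections import defaultdict

    QUERY_RE = re.compile(r"query: (\S+) IN (\S+)")
    THRESHOLD = 10000  # unique names per zone; tune to your traffic

    unique_names = defaultdict(set)

    for line in sys.stdin:
        m = QUERY_RE.search(line)
        if not m:
            continue
        qname = m.group(1).rstrip(".").lower()
        labels = qname.split(".")
        # Crude "zone" approximation: last two labels. Good enough for a
        # first pass; a real pass would use the public suffix list.
        zone = ".".join(labels[-2:])
        unique_names[zone].add(qname)

    for zone, names in sorted(unique_names.items(), key=lambda kv: -len(kv[1])):
        if len(names) >= THRESHOLD:
            print(f"{zone}: {len(names)} unique names -- candidate junk zone")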

Bot scripts will bypass recursive DNS servers and typically talk directly to authoritative servers. Most of these scripts do not even look at zone or resource record TTLs, which is unfortunate for them, as it makes spotting them trivial. I then dig deeper into the networks they originate from: their IPv4/IPv6 CIDR blocks, who they peer with, what business they claim to be in. Sometimes they are squatters announcing routes from businesses that went under and laid off the people who would have released the IP allocations. I help get those allocations clawed back from the squatters when I can.
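
To show what "spotting them is trivial" looks like in practice, here is a hedged sketch (again with an assumed timestamp/log line format and a single hardcoded TTL rather than a real TTL table) that flags clients re-asking the authoritative server for the same name far faster than the published TTL allows, i.e. clients that are not caching at all:

    # Sketch: flag clients that re-query the same name well inside the
    # record's TTL, i.e. clients that ignore caching entirely.
    # Timestamp and line format are assumptions; adapt to your server's log.
    import re
    import sys
    from datetime import datetime
    from collections import defaultdict

    # Expects lines like:
    # "01-Jan-2024 12:00:00.000 ... client 192.0.2.1#53514 ... query: www.example.com IN A"
    # -- purely illustrative, not a real server's exact format.
    LINE_RE = re.compile(
        r"^(\d{2}-\w{3}-\d{4} \d{2}:\d{2}:\d{2})\.\d+ .*client ([\d.a-fA-F:]+)#\d+ .*query: (\S+) IN \S+"
    )
    RECORD_TTL = 86400                       # what the zone actually publishes
    MIN_SANE_INTERVAL = RECORD_TTL * 0.5     # anything faster is suspicious

    last_seen = {}
    offenders = defaultdict(int)

    for line in sys.stdin:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%d-%b-%Y %H:%M:%S")
        key = (m.group(2), m.group(3).lower())   # (client IP, qname)
        if key in last_seen:
            delta = (ts - last_seen[key]).total_seconds()
            if delta < MIN_SANE_INTERVAL:
                offenders[key] += 1
        last_seen[key] = ts

    for (ip, qname), hits in sorted(offenders.items(), key=lambda kv: -kv[1]):
        print(f"{ip} re-queried {qname} {hits}x inside the TTL window")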

Side project idea for Google / Cloudflare / OpenDNS / etc.: these companies have a unique view into DNS traffic and could also help spot the squatters. They could free up a significant amount of IP space if they allocated a small team of interns to analyze their traffic, used automation to build graphs and reports, then submitted the data to the IP registries once confidence was high enough. Registries are also likely to take communication from these companies far more seriously than from some retired hobbyist.

This will probably come up, but some may say having high TTLs is risky. It can be, but not for me. If all of the internet shared one massive /etc/hosts and DNS ceased to exist, it would rarely need updates from me, and if a record went stale there would be no harm, no foul as far as I am concerned. There are a myriad of other reasons I keep unusually high zone/record TTLs, but that would turn into a blog post, and I have been too lazy as of late to write any since interest is usually very low. I don't even have my blog VMs spun up.


