Awesome article! There's interesting related work where we used DNS TTLs as a covert channel for passing data, without needing to control the domain(s) being used. While developing that covert channel, we found a variety of idiosyncrasies in the client-side DNS infrastructure and wrote them up: some devices report an erroneously high TTL, some unnecessarily shorten the TTL, some represent entire clusters of DNS resolvers with interesting properties, and so on. Based on your work, it appears that over the past five years the number of open resolvers has dropped dramatically, from ~30M to ~3M.
Your email response really is indicative of some of the folks that get cranky when you send them packets :)
I wish I had a more insightful comment, but I'll just say this:
I love posts like this where someone applies a theoretical concept in a fun and interesting (even if not practical) way.
Reminds me of this, one of my all-time favorites:
Guessing date as circa 2003. Could be wrong.
As for DNS, djbdns can store arbitrary bytes in an RR (e.g., TXT), written as octal escapes. For example, a modified dnstxt can print formatted text stored in TXT records, with linefeeds, etc.
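For the curious, tinydns-data represents arbitrary bytes as `\NNN` octal escapes in its data file. A rough sketch of that escaping (my own approximation of the convention, not djb's exact code):

```python
def tinydns_escape(data: bytes) -> str:
    # tinydns-data writes arbitrary bytes as \NNN octal escapes;
    # keep plain ASCII alphanumerics readable and escape everything
    # else (':' is the field separator, so it must be escaped).
    out = []
    for b in data:
        c = chr(b)
        if b < 128 and (c.isalnum() or c in "-._"):
            out.append(c)
        else:
            out.append("\\%03o" % b)
    return "".join(out)

# A generic-record line carrying type 16 (TXT) rdata might look
# like this (simplified; real TXT rdata is length-prefixed):
payload = b"\x0bhello\nworld"  # 0x0b = length byte for 11 chars
print(":example.com:16:" + tinydns_escape(payload) + ":3600")
```

The linefeed comes out as `\012`, which is exactly how a modified dnstxt can round-trip formatted text.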
Super fun article. I also like to see a "real" implementation of crazy ideas like this.
Can anyone confirm whether Microsoft DNS servers default to caching an unlimited amount of data? The article lists "Unlimited??" as the default for these systems. Eyeballing the pie chart, it looks like roughly 20% of the servers are running Microsoft, which could provide quite a lot of storage.
An enhancement of this technique could be used on one’s own private network of DNS resolvers for the specific purpose of acting like a highly available directory of private cloud nodes, with each service’s information encoded in one DNS TXT record.
This would kind of be like a mashup of Apple Bonjour and this technique.
The big question is: how long should the information be cached in such a setup, assuming the cloud itself is highly unreliable, so as to make the entire thing extremely fault tolerant?
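To make the idea concrete, here's a minimal sketch. The key=value layout and the TTL heuristic are my own assumptions, not from the article or the comment above:

```python
def encode_service_txt(name: str, host: str, port: int) -> str:
    # Hypothetical one-TXT-record-per-service layout.
    txt = f"svc={name} host={host} port={port}"
    # A single TXT character-string is capped at 255 bytes.
    assert len(txt.encode()) <= 255
    return txt

def pick_ttl(expected_node_lifetime_s: int, fraction: float = 0.1) -> int:
    # One answer to "how long to cache": keep the TTL a small
    # fraction of a node's expected lifetime so stale entries age
    # out quickly, with a 30s floor to avoid hammering resolvers.
    return max(30, int(expected_node_lifetime_s * fraction))

print(encode_service_txt("db", "10.0.0.5", 5432))
print(pick_ttl(3600))  # → 360, if nodes live ~1h on average
```

The trade-off is the usual one: a short TTL bounds how long a dead node stays in the directory, at the cost of more queries hitting the resolvers.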
Too bad he couldn't use FUSE. Would be nice to do `ls` and other commands with this.
While an interesting use, abusing DNS in a similar way has been a long-known (15-year-old) security vulnerability. For example, OzymanDNS. Even then, that was just one of the first published exploits; people had been doing DNS tunneling for some time before that.
There are detectors of DNS abuse that I imagine the people who would actually store files in DNS would not want pointed at their files.
Wish I had more to add than: "This is so neat!"
Seems like this would be a good way to circumvent web filters that block remote file services (but allow DNS over TCP or UDP).
How would one restrict this capability from an administrative perspective?
Fun article, kudos!
Just a tiny correction: RIPE Atlas' reliability tags (e.g., "-stable-Xd") have nothing to do with the probe "changing the public IP address once a day". Those filters simply measure the probe's uptime over different time windows.
In fact, a probe can carry the "-stable-1d" tag you mentioned even if it has been down for up to 2h over the last day.
You can use the dig utility to check whether a DNS server is recursive. Just do the scan in two steps: one large port scan using masscan, netscan, etc., then a smaller scan of the IPs with port 53 open to see whether they are recursive. If the server is not recursive, you'll see this in dig's output:
;; WARNING: recursion requested but not available
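That warning reflects the RA (recursion available) bit in the DNS response header. If you'd rather check the bit programmatically than parse dig's output, here's a small sketch (the crafted header below is illustrative, not a real server response):

```python
import struct

def recursion_available(response: bytes) -> bool:
    # The RA bit lives in the DNS header flags: the second 16-bit
    # field, mask 0x0080 (RFC 1035, section 4.1.1). dig prints the
    # "recursion requested but not available" warning when it set
    # RD in the query but RA is clear in the answer.
    if len(response) < 12:
        raise ValueError("not a DNS message")
    flags = struct.unpack("!H", response[2:4])[0]
    return bool(flags & 0x0080)

# Example: a 12-byte header with QR, RD, and RA set (0x8180), as
# a typical recursive resolver would answer.
hdr = struct.pack("!HHHHHH", 0x1234, 0x8180, 1, 1, 0, 0)
print(recursion_available(hdr))  # → True
```

For the actual scan you would send a real query over UDP port 53 and feed the raw response bytes to this check.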
I'm surprised at the market share dnsmasq has; I would've expected the BIND and dnsmasq numbers to be flipped.
Ha! Combine this idea with my proof-of-concept CDN53 Chrome extension and it would be serving websites directly from others' DNS resolvers =:)
Great article. I've noticed a trend: anything that requires masscan is probably going to be fun/interesting.