An example of this would be a company that uses .corp in an internal domain name. Under the new gTLD process, .corp could be registered by another company for use on the public Internet. If that happens, when a user tries to reach internal resources on the company network using .corp, there is a chance the query could actually be answered by the now-legitimate .corp servers on the Internet.
Using an internal domain name like this is a very common practice among businesses, so any issues with .corp could be widespread. The owners of these new gTLDs could also manipulate their DNS records to redirect wayward queries, opening the door to malware or phishing attacks on unsuspecting systems.
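One way to gauge your exposure is to check whether names you expect to be internal-only now resolve at all. Below is a minimal sketch using Python's standard resolver; the hostnames are hypothetical examples, and a real audit would query a public resolver from outside the corporate network, since inside it these names resolve internally by design.

```python
import socket

# Hypothetical internal hostnames that assume ".corp" never resolves publicly.
INTERNAL_HOSTS = ["mail.corp", "intranet.corp", "files.corp"]

def resolves(hostname: str) -> bool:
    """Return True if the system resolver finds any address for the name."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

for host in INTERNAL_HOSTS:
    if resolves(host):
        print(f"WARNING: {host} resolves; check where those answers come from")
```

Run from outside the internal network, any warning here means queries for that name are being answered by someone on the public Internet.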
However, it’s unlikely that companies in this new wave of gTLD registrations will do such a thing. This is an unprecedented change to the Domain Name System, and it will be examined under a microscope. Still, as the number of gTLD servers grows, a hacker has more servers to attempt to compromise. The root servers and current TLD servers have been very secure and reliable so far, but the new gTLD registrants may have flaws or poor security practices that make it easier for someone to gain access and cause problems.
This isn’t just an issue for anyone using .corp, either; it could affect anyone using internal networks. Why? An internal name you’re using now could one day be registered as a gTLD and cause name collisions for you. Interisle Consulting Group monitored inbound traffic to the root servers over a 48-hour test. Of the traffic examined, 3% was for TLDs that were not registered but soon will be (.corp, .home, .site, .global, etc.), and a whopping 19% was for syntactically valid but unregistered strings that could become gTLDs in the future.
Many vendor defaults configure systems to use these (currently unregistered) gTLDs, which is why so much of this traffic hits the root servers. You may never type “website.home” into your browser, but that doesn’t mean some software or hardware on your network isn’t trying it in the background. So any gTLD that gets registered could unknowingly cause name collisions for certain software and hardware vendors.
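The audit this suggests can be sketched very simply: compare the name suffixes your devices and software actually use against the strings applied for as gTLDs. Both lists below are illustrative placeholders, not authoritative data.

```python
# Illustrative subset of applied-for gTLD strings (not an official list).
APPLIED_FOR_GTLDS = {"corp", "home", "site", "global", "mail"}

# Hypothetical suffixes gathered from device and vendor default configs.
INTERNAL_SUFFIXES = ["corp", "lan", "home", "internal", "local"]

def collisions(suffixes, new_gtlds):
    """Return the internal suffixes that match a soon-to-exist gTLD."""
    return sorted(s for s in suffixes if s.lower() in new_gtlds)

print(collisions(INTERNAL_SUFFIXES, APPLIED_FOR_GTLDS))
# → ['corp', 'home']
```

Anything this flags is a name your equipment may one day hand to the public DNS instead of your internal servers.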
ICANN is working on mitigation techniques to avoid problems like this. Right now, some gTLD applications (such as .corp and .home) are on hold until further investigation can be done. There is a good chance some of the more common strings will not be allowed if they could cause real problems for the Internet.
Another concern is the root server network. There are currently 13 logical root servers in the Domain Name System, served from 377 physical sites using anycast as of this writing. As the new gTLDs are delegated (up to 1,000 per year), the load on the root servers will slowly increase.
By some projections, the increased traffic will be negligible and easy to manage. The bigger concern with the root servers is the provisioning involved. The current operation and maintenance of the root servers is a very solid system, and changes to the root zone currently happen at a rate of only about one per year for each gTLD.