Early Sunday morning we experienced a partial DNS outage due to routing issues in our Tokyo data center. Here's what happened and what we're doing about it.
A new phishing attack targeting eNom has been identified. Here's what to watch out for.
Three days ago we experienced a partial DNS outage due to SERVFAIL errors. Here's what happened and what we're doing about it.
As an advisory to customers, please watch out for phishing attempts claiming suspension of your domains. Here's what to do if you see something phishy.
On Thursday, May 20th, our Amsterdam data center had a period of downtime due to a Denial of Service attack. Additionally, during the attack we were unable to execute DNS queries outbound from our application servers in Virginia. Here is what we know so far and what we're doing about it.
Last Friday our San Jose region and Redirector service suffered an outage, and our Europe region came under a Denial of Service attack. Here is what happened and what we're doing about it.
Today our redirector suffered an outage and zone changes were delayed. Here's what happened and what we're doing about it.
On Monday, December 1st, 2014, we experienced a major volumetric DDoS attack aimed at a customer's domain. Here is our full report.
On Saturday, September 19th, 2014, we experienced a significant DDoS attack aimed at one of our customers' domains. Read on for the full report.
On Thursday, February 20th, 2014, there was an outage in our Amsterdam data center. This post mortem describes the event, why it happened, what we did to recover, and what we are doing to prevent similar issues in the future.
On December 23rd, 2013, we experienced an NTP attack that took our unicast NS1 server offline for 24 hours. Prompted by this event, we switched almost every one of our customers over to Anycast within a few days.
Several days ago we detected the start of a DNS amplification attack. This attack was nothing special until the morning of June 3rd, when it changed in a way that caused an outage for DNSimple name servers.
On May 20th, beginning at about 11:30am Pacific time (18:30 UTC), we experienced a partial service outage. A component failure blocked our customers from updating their DNS records.
Post mortem of the ALIAS record resolution failure on April 6th.
On Friday, March 8th we had a partial DNS outage. Available, well-performing DNS is a critical utility for the businesses our customers run.
From approximately 09:35 UTC until 10:00 UTC on July 23rd, 2012, DNSimple experienced an outage across four name servers. The outage appears to be the result of a distributed denial of service (DDoS) attack, which prevented us from responding to DNS requests in a timely fashion. At the same time, our job processing queue stopped due to a failed deployment. The resulting backlog in the job queue caused new subscriptions to time out and prevented customers from updating DNS records.
Today all of our name servers experienced performance degradation that in turn caused customer sites to fail to resolve. This was especially evident to customers using our ALIAS record type.