David Dadian is CEO of powersolution.com, a solutions provider/MSP based in Ho-Ho-Kus, NJ. His practice specializes in business continuity, and he was featured in a case study in this month’s issue of Business Solutions magazine. Not long after my interview with Dadian, his company’s DRBC (disaster recovery and business continuity) service was put to the test following Hurricane Sandy.
I followed up with Dadian recently, and he gave me an update that included some great insights other VARs and MSPs should heed. Below are highlights of our conversation.
How did Hurricane Sandy impact your business?
Dadian: We didn’t get hit with any flooding at our headquarters, but we did lose power for a full week. We worked from our DR facilities and never missed a beat. The biggest issue we faced was with our phone systems. It took a day and a half to get our business number forwarded to our cell phones. In hindsight, this could easily have been prevented. I wanted to wait as long as possible to set up the call forwarding feature, which we’re able to do online via our VoIP and Internet service provider’s website. When I left work that day, I had planned to activate the call forwarding as soon as I got home. Shortly after I arrived home at 4:15 p.m., we lost power. The next morning, I fired up the generator and tried logging on, but by that point our ISP was fielding so many requests that its web portal was unavailable for several hours. It was an unfortunate situation, but we learned from it, and fortunately none of our customers suffered.
Describe the transition back to work after the power came back on.
Dadian: The power was restored to our headquarters at 11:30 a.m. Monday, November 5. Our Internet and phone service were restored by 12:45 p.m., and we were moved back in and running by 1:30 p.m.
What was the biggest challenge you faced getting your customers back up and running?
Dadian: All of our clients on our Datto DRBC solution suffered minimal downtime, as we spun them up locally or in our cloud. For our customers without power, the biggest challenge was the outage itself, and those that had not signed on to our DRBC solution asked us to make it a priority. Our phones have been ringing off the hook since Sandy with requests for meetings. In addition to hearing from customers who weren’t previously on a DRBC program, we’re hearing from several new prospects who were let down by their IT service providers.
After going through this experience, what’s your biggest takeaway?
Dadian: We are reviewing our internal disaster recovery policies and procedures, and we’ve already formed an internal committee. The top two issues we’re focusing on are phones and reserve power. Our external policies for our clients are also being revisited and tweaked. This will take approximately six months to vet, review, and deploy, and to schedule dry runs with our clients.