
Fire in the Hole: Overcoming Obstacles as a Team

In February, we had a major outage on our Montreal network due to a sudden cable chamber fire. You might not think something like this is particularly dramatic – why can’t we just turn the Internet back on? Well, it usually isn’t that simple, and when a cable chamber is on fire, the situation gets tense. Not only that, but every time an outage happens, countless individuals are left stranded without their network, and as their service provider, that’s just not something we can accept. So, to give you some insight into what goes on behind the scenes during an outage, let’s take you back to February 16th, 2023.

Beanfield got a notification early in the morning that a cable chamber fire had impacted our network in downtown Montreal. Our GIS (Geographic Information Systems) and NOC (Network Operations Centre) teams started to investigate, because, coincidentally, some network maintenance had been planned at the same time. Our fibre-optic cables had just been spliced, but now we had a new problem to deal with: putting out a literal fire on our network.

Now, cable chamber fires aren’t like a flare-up on your kitchen stove. We can’t deal with them directly. Before we get involved, Hydro Quebec needs to handle the problem by investigating and controlling the blaze. That means all we can do is wait until they arrive, and hope the damage doesn’t get too bad. 

A few hours later, we heard from Hydro Quebec. They had been notified of the incident and their work was in progress – but they weren’t on site yet. Our GIS team got on the case, compiling a list of affected customers, and NOC worked with them to identify which services were hit.

By 11am, we still had no idea when we would get access to the site to start fixing our cables. Our internal teams were communicating and coordinating behind the scenes to make sure we notified our customers of the problem, kept them up to date, and worked on getting them back on our network as fast as possible. It was important to make sure that any critical services, like hospitals or emergency hotlines, were unaffected or could be given a workaround to keep them online.

By noon, we had figured out that 13 of our cables had likely been impacted by the fire. Each cable can run up to 800 lines of fibre – that’s thousands of potential customers. And it wasn’t just a Beanfield issue – other providers had also been hit.

Just over an hour later, we got the news – Hydro Quebec was on site, assessing the extent of the damage. We were a step closer to knowing when we would be given access to the site…or so we thought.

By 4pm, we still had no ETA. It was clear that this was going to be a long haul, with no quick fix to the problem. Our Senior Director of OSP (Outside Plant) started assembling a team of 5 fibre techs from LiteWave, and coordinated with our Director of Fibre Deployment to get the LiteWave team to Montreal to help with fibre splicing when the time came.

It was 9pm, more than 12 hours since the first report of the outage, and we still had no word on when we’d get access to the site. Hydro Quebec hadn’t restored power and wasn’t letting anybody in until they could complete their work and guarantee everyone’s safety. On our end, we had figured out that 7 cable pulls would be necessary – meaning we would need to replace fibre between cable chambers, physically threading it from point A to point B. We knew it was going to take a while to get it all done (probably through the weekend), but we had the team secured for the next morning and were waiting at the starting line, ready to sprint.

Around 8am on Friday, February 17th, 24 hours after we first got the news, we had the all-clear to get to the cable chambers and start working. However, another hurdle stood in our way: freezing rain and snow had completely coated our spool of cable, not to mention the roads between the warehouse and the cable chamber site. Our team pushed through and got there in the early afternoon, but just when we thought we could finally get to work, we found out that Hydro Quebec was still on site, and we had to stay at the ready until they gave us access to the cable chambers.

Finally, by 5:30pm, we managed to complete the first of our seven fibre pulls, thanks to our partners at Telecon who aided in the process. To give you an idea of the undertaking this repair required, 8 fibre designers in Toronto and Montreal worked around the clock, in 20 six-hour rotations, to design the cable run that would meet at the main fibre-optic splice closure. Basically, they had to work overnight to make absolutely sure that the replacement fibre was going in the right place, and that the network would be good to go when the cables finally connected point A to point B.

Teams continued to work tirelessly through the weekend, and on Saturday, February 18th, at around 2am, we got the news that our last pull was nearly through – just a few more metres. That all sounds great, but it takes a lot more than running cable to get things up and running. Earlier, we mentioned splicing: the process by which two fibre-optic lines are joined so that light can pass from one to the other efficiently. It’s a meticulous task that must be done line by line, with a single cable holding, once again, up to 800 lines. At 8am, our teams from Novatel and LiteWave were still splicing fibre.

Then, it was a matter of sharing the space with the other providers that needed access to their fibre – other telcos had their own repairs to make on their networks. By Saturday night, things began to wind down, and soon our network was up and running, good as new.

So, why tell this story? Well, it’s important to remember what’s actually happening when you lose access to the Internet. It’s frustrating and deeply inconvenient, but sometimes it’s also completely out of our control. Something as simple as a mouse with a peculiar appetite chewing through a cable, or as complex as a cable chamber fire, can leave hundreds without Internet access. But the reason you lost your connection isn’t the important part. It’s the effort that goes into bringing it back that’s essential to communicate.

What’s impressive here is the collaboration that had to take place. Between several internal departments, a public utility, and external partners – other telcos, contractors, and more – coordination had to be seamless to make sure services were re-established as fast as possible. It was a herculean effort from every single person involved, and a dramatic story that bears repeating.

We’re endlessly proud of our teams for pulling this off as well as they did. And while the goal is always to avoid outages altogether, it’s comforting to know that, when we need them, we’ve got a team of superheroes at the ready to save the day.
