When you read all the success stories on social media, including LinkedIn, things never seem to go wrong. You only hear the good news. I am guilty of this as well; my posts typically show the progress and successes in projects. Does that mean things never go pear-shaped? Of course it doesn’t. “Anything that can go wrong, will go wrong.” Murphy was a wise man.
Things going wrong is inevitable. It is annoying for the client and frustrating for the supplier, it costs energy, time and money to put things right, and this is particularly true for projects offshore. Depending on the project, the need to revisit the site to fix a problem may require vessels or helicopters to be hired solely for that purpose. It may prevent vessels or accommodation/work platforms from being moved to the next platform, causing delays in other projects. Depending on the situation, it may even force you to use less safe means of access to the platform.
Fortunately, most clients realise that equipment can fail, that things can go wrong and that unexpected action must sometimes be taken to make things work properly. It is all part of the game. The key point, however, is how quickly and efficiently you respond to a system failure. How flexible you are, so that you can – for example – take advantage of a weather window early next week (or even tomorrow). Looking at the past three weeks, the opposite is also true, thanks to Ciara, Dennis and Ellen and two other unnamed storms that have kept me ashore against my will.
I find it very, very discomforting when a system I have supplied is not working as it should. I take pride in my work, these systems are typically substantial investments, and I aim to solve problems for clients, not to create them. I have seen systems work just fine during the Factory Acceptance Test (FAT) and then suddenly cause problems during the Site Acceptance Test (SAT), or even after a successful SAT. In one case an issue was revealed during the FAT and required replacing parts during commissioning. And, fair is fair: if you speak of your successes, you should also address the things that went wrong, and how you solved them. It balances things out a bit.
Fortunately, I have never experienced problems that resulted in accidents or unsafe situations. Most of the issues were minor, yet unexpected: things that could be solved quickly and without great cost. And in most cases, time pressure turned out to be a key factor.
So, let’s bring it on; here are some things that went wrong for us:
On a project quite some time ago, a third party installed battery boxes for us on a very simple skid. They did not realise that some of the unused holes in the bottom of the battery boxes were actually drainage holes, so they plugged them. And I did not notice. Not a big deal, normally, but it is when the battery box is in the splash zone… Result: battery box flooded, batteries dead, system down. And not accessible for repair.
Lesson learned: always check everything twice, just to make sure.
On another project, we had some issues with a remote monitoring system. Not necessarily our fault, more an unfortunate chain of events. The SIM card for satellite communication, to be provided by the client, became available only one day before I was due to commission the system offshore, while the SRM equipment was already on the platform. There were some last-minute changes in the SRM software as well. Combine this with very tight schedules and deadlines, and you know you are running a risk. In effect, the SRM system was installed offshore without being fully tested.
The result was a system that did not work correctly and was not accessible remotely, and we had to replace it four months later with a new, fully tested system. After thorough investigation it turned out that all the hardware was perfectly in order, but that there was a typo in the software.
Lesson learned: never ever ship something offshore without completing all tests.
The third issue concerned a failure for reasons we did not expect. Quite recently it was noticed from a helicopter that a PV module was missing on one of our skids. As it was December, it was important to replace it. On reaching the platform, it turned out that the material we use to protect against galvanic corrosion between PV modules and skid had deformed to such an extent that the fixings had gone slack. This caused the PV module to vibrate and loosen the nuts. Eventually there was so much slack that the module’s freedom of movement caused it to be ripped off the skid during a storm. Fortunately, I had a spare PV module in stock, so the replacement could be done very quickly.
Lesson learned: although the nuts may be very tight during FAT, conditions offshore are more severe than you might expect. Always make sure that nuts cannot come undone, for whatever reason.
The fourth and final example concerns an unexpected product failure. In this case, one of our skids had been offshore for over two years without maintenance. We received a report that the system was working intermittently: one week it was fine, the next it was down. Based on the problem description, it appeared that the charge controller was going into Low Voltage Disconnect, in which case all loads are disconnected, allowing the PV modules to recharge the batteries. Once the batteries are sufficiently charged again, the load is reconnected and the system returns to normal operation.
Upon inspection, this was indeed the case. One malfunctioning battery was draining the others, resulting in an overall battery voltage below the Low Voltage Disconnect value. A couple of days of charging the batteries without load would bring the system voltage back to healthy values, after which the load was reconnected. It then took about a week to drain the batteries below the Low Voltage Disconnect again. The system was stuck in a Low Voltage Disconnect/Reconnect cycle. The solution: simply disconnect the faulty battery, since there is sufficient spare capacity anyway.
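For readers unfamiliar with charge controllers: the disconnect/reconnect behaviour described above is essentially a hysteresis loop between two voltage thresholds. A minimal sketch in Python, where the threshold values are illustrative assumptions for a 24 V battery bank, not the settings of any specific controller:

```python
# Minimal sketch of a charge controller's Low Voltage Disconnect (LVD)
# hysteresis. Threshold values are illustrative assumptions for a 24 V
# battery bank, not the settings of any particular controller.

LVD_VOLTS = 22.8   # disconnect the load below this voltage
LVR_VOLTS = 25.2   # reconnect the load above this (higher) voltage


class ChargeController:
    def __init__(self):
        self.load_connected = True

    def update(self, battery_volts: float) -> bool:
        """Apply LVD/reconnect hysteresis; return the load state."""
        if self.load_connected and battery_volts < LVD_VOLTS:
            self.load_connected = False   # protect the batteries
        elif not self.load_connected and battery_volts > LVR_VOLTS:
            self.load_connected = True    # batteries have recovered
        return self.load_connected
```

With a faulty battery slowly dragging the bank voltage down, a controller like this bounces between the two states indefinitely, which matched the "one week on, one week off" pattern we were seeing.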
Lesson learned: this battery failure was unforeseen and unavoidable, one of those situations that can simply occur. Realising that this is possible is the lesson in itself.
In all of the above cases, vessels and personnel had to go to the offshore structure outside of the scheduled maintenance trips, making these expensive operations for our clients. Nevertheless, it has not cost us any clients, as they know this is part of working offshore. What does make the difference is our dedication to the project, the client and our equipment. Immediate response, offering solutions to the problem and being available when required are key.
On the other hand, equipment failure is not common; it is very unusual, even. So far I have had to replace one marine lantern and one fog detector. In both cases this was related to a batch of circuit boards with issues. The only real problem was that the failures occurred after a successful FAT and after commissioning offshore. The failing lantern was not an urgent matter, as it was covered by a backup lantern. The fog detector had to be replaced, however, because a failed fog detector causes all Aids to Navigation equipment to run 24/7, using far more power than the system is designed for and consequently draining the batteries. This is not a problem in summer, but it is in winter (the short daylight period and low temperatures have a large impact on the batteries).
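A rough energy-balance calculation shows why the same fault is harmless in summer but critical in winter. All figures below are hypothetical round numbers for illustration, not the actual ratings of our skids:

```python
# Hypothetical daily energy balance for a PV-powered Aids to Navigation
# skid. All figures are illustrative assumptions, not real system ratings.

LOAD_WATTS = 30.0        # assumed power draw while the AtoN load runs
PV_WATTS_PEAK = 400.0    # assumed installed PV capacity


def daily_balance_wh(run_hours: float, sun_hours: float) -> float:
    """Daily energy balance in Wh: PV yield minus load consumption."""
    return PV_WATTS_PEAK * sun_hours - LOAD_WATTS * run_hours


# Normal winter operation: load runs only during darkness (~16 h/day)
winter_normal = daily_balance_wh(run_hours=16, sun_hours=1.5)   # small surplus

# Failed fog detector in winter: everything runs 24 h/day
winter_failed = daily_balance_wh(run_hours=24, sun_hours=1.5)   # daily deficit

# The same failure in summer is absorbed by the much larger PV yield
summer_failed = daily_balance_wh(run_hours=24, sun_hours=5.0)   # still a surplus
```

Under these assumed numbers the winter failure flips a small daily surplus into a daily deficit, so the batteries drain steadily until someone intervenes; in summer the extra PV yield absorbs the same fault.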
Learning from these experiences is also key, in order to avoid (or at least reduce the likelihood of a recurrence of) these issues in the future.
But the most valuable lesson: things tend to go pear-shaped in December, when it is cold, windy and rainy, with rough seas, and when the daylight period is very short. Sounds a bit like karma… I am sure good ol’ Mr. Murphy would have some interesting comments about that as well.