Create value, avoid multiplying errors
Fiber network roll-outs are at a turning point. As operators approach near-complete fiber coverage, competitive dynamics are shifting from expansion to efficiency and differentiation. New ecosystems are emerging, built on distributed data architectures and multi-partner collaboration models. Concepts such as data-as-a-product and AI-driven automation are no longer theoretical – they are becoming strategic imperatives.
But these ambitions rest on a simple truth: any automation is only as powerful as the data it consumes. Without a clean, structured, and validated foundation, advanced analytics and automation will amplify errors rather than create value. This is why data migration and cleansing are not just technical exercises — they are prerequisites for unlocking new revenue streams, operational resilience, and faster ROI. Starting with the foundational capabilities and ensuring high-quality data as a single source of truth is therefore the very first step to avoid multiplying the mess. And data sources, particularly when combined from several legacy systems, almost always contain imperfections.
It is estimated that as much as 80% of the work in today's data projects goes into data transformation, cleansing, and migration, together with establishing new data-hygiene workflows. Done incorrectly, this work quickly becomes a severe drain on resources and an operator's nightmare.
Lessons learnt
We have repeatedly observed how data transformation projects can stall progress if not approached with the right strategy. We have supported operators and utility providers who were struggling with tens of thousands of scattered network infrastructure data units — spread across outdated databases, legacy information systems, and even paper maps. In some cases, we were called in mid-project, after months of work and significant budgets had already been consumed without delivering usable results.
What follows are seven of the most damaging pitfalls we observed across our projects, each a reminder of how ignoring the data itself can derail even the best-designed Network Information Systems.
1. Prioritizing the system over the data
Companies often place immense focus on selecting the perfect information system while neglecting the quality of the data that would populate it.
- What was ignored: The fact that a software system might be replaced in 10-20 years, but the underlying data must serve the business for the entire infrastructure lifecycle – 50 years or more.
- The negative impact: This resulted in a state-of-the-art system running on unreliable data, fundamentally compromising the long-term value of the investment.
2. Underestimating the scale of source data pre-processing
Projects faltered by failing to grasp the sheer scale of the source data. The primary challenge was not just the content but managing tens of thousands of unstructured files.
- What was ignored: A process and tools to identify, filter, and acquire only the documents relevant for processing from the entire collection.
- The negative impact: Without this selection process, the data transformation teams worked with outdated or duplicate files. This created an unreliable foundation and made it impossible to audit the migration (a minimal triage sketch follows this list).
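To make this concrete, here is a minimal sketch in Python of the kind of document triage that was missing. It assumes the legacy documents sit in a local folder, that relevance can be approximated by file type, and that exact duplicates can be detected by content hash; the file extensions and folder name are illustrative, not taken from any real project.

```python
import hashlib
from pathlib import Path

# Illustrative set of file types assumed to be relevant for conversion.
RELEVANT_SUFFIXES = {".dwg", ".dxf", ".shp", ".pdf", ".xlsx"}

def triage(source_dir: str) -> list[Path]:
    """Return one representative file per unique content, filtering out irrelevant formats."""
    newest_by_hash: dict[str, Path] = {}
    for path in Path(source_dir).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in RELEVANT_SUFFIXES:
            continue  # skip folders and formats not needed for the migration
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        current = newest_by_hash.get(digest)
        # Among identical copies, keep only the most recently modified one.
        if current is None or path.stat().st_mtime > current.stat().st_mtime:
            newest_by_hash[digest] = path
    return sorted(newest_by_hash.values())

if __name__ == "__main__":
    for document in triage("./legacy_documents"):
        print(document)
```

Even a simple filter like this gives the transformation team a defined, auditable input set instead of an ever-changing pile of files.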
3. Neglecting the target system's data model
A critical error was starting data conversion without a thorough understanding of the target system's data model.
- What was ignored: The fundamental data schema and business rules of the new Network Information System (NIS).
- The negative impact: Data incompatible with the new system, forcing costly rework or resulting in a system that couldn't accurately model the network (a minimal validation sketch follows this list).
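A practical consequence of studying the target model early is that converted records can be validated against it before loading. The sketch below illustrates the idea with assumed, simplified field names (`cable_id`, `fiber_count`, `status`) and rules; a real NIS schema is far richer, but the principle of rejecting incompatible records up front is the same.

```python
from dataclasses import dataclass

# Assumed, simplified target-model rule: only these lifecycle states exist.
ALLOWED_STATUSES = {"planned", "under_construction", "in_service", "decommissioned"}

@dataclass
class ValidationIssue:
    record_id: str
    message: str

def validate_cable(record: dict) -> list[ValidationIssue]:
    """Check one converted cable record against the assumed target-model rules."""
    issues: list[ValidationIssue] = []
    record_id = str(record.get("cable_id") or "<missing id>")
    if not record.get("cable_id"):
        issues.append(ValidationIssue(record_id, "cable_id is mandatory in the target model"))
    fiber_count = record.get("fiber_count")
    if not isinstance(fiber_count, int) or fiber_count <= 0:
        issues.append(ValidationIssue(record_id, f"fiber_count must be a positive integer, got {fiber_count!r}"))
    if record.get("status") not in ALLOWED_STATUSES:
        issues.append(ValidationIssue(record_id, f"status {record.get('status')!r} is not defined in the target model"))
    return issues

if __name__ == "__main__":
    sample = {"cable_id": "C-1042", "fiber_count": 0, "status": "active"}
    for issue in validate_cable(sample):
        print(f"{issue.record_id}: {issue.message}")
```

Running such checks during conversion turns schema mismatches into early, fixable findings instead of post-migration surprises.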
4. The pitfall of unsupervised automation
Projects adopted modern AI and off-the-shelf tools, expecting a fully automated data transformation. They relied on batch processing that hid problems until the migration was complete.
- What was ignored: The need for visualization and a human-in-the-loop process in which network experts can review and correct the tool's problematic interpretations during the data transformation, as well as the need for flexible tools that handle different input data standards ("data dialects").
- The negative impact: Without expert oversight, incorrect assumptions and data errors were amplified and propagated throughout the dataset. This created systemic quality issues that were far more difficult to fix after the transformation (see the confidence-gate sketch after this list).
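One way to keep experts in the loop without reviewing every record is a confidence gate: automated interpretations above a threshold pass through, everything else is queued for expert review. The sketch below is a generic illustration of that routing, not the workflow of any particular tool; the threshold, field names, and confidence scores are assumptions made for the example.

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.9  # assumed cut-off; tuned per input standard ("data dialect")

@dataclass
class Interpretation:
    source_file: str
    payload: dict
    confidence: float  # reported by the automated extraction step

@dataclass
class Router:
    accepted: list[Interpretation] = field(default_factory=list)
    review_queue: list[Interpretation] = field(default_factory=list)

    def route(self, item: Interpretation) -> None:
        """Send low-confidence results to experts instead of straight into the target dataset."""
        if item.confidence >= REVIEW_THRESHOLD:
            self.accepted.append(item)
        else:
            self.review_queue.append(item)

if __name__ == "__main__":
    router = Router()
    router.route(Interpretation("plan_017.pdf", {"splice_closure": "SC-22"}, 0.97))
    router.route(Interpretation("plan_018.pdf", {"splice_closure": "??"}, 0.55))
    print(f"{len(router.accepted)} auto-accepted, {len(router.review_queue)} awaiting expert review")
```

The point is not the threshold itself but the visibility: problems surface continuously during the transformation, not after the batch has finished.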
5. The flawed assumption of post-migration data cleansing
The belief that data quality issues could be fixed "later", during live operations rather than before migration, proved to be a critical miscalculation.
- What was ignored: The user expectation that "better no data than wrong data." Once trust is lost, the system is abandoned. Furthermore, operational teams lack the time and resources to fix historical errors on top of their daily work.
- The negative impact: The system was populated with unreliable data, leading to operational errors. The planned "clean-up" phase never happened, leaving the flawed data in place indefinitely.
6. Ignoring updates to source data during the project
Projects treated the source documentation as static, but the legacy files themselves were being updated throughout the months-long data transformation project.
- What was ignored: A process to manage and incorporate ongoing changes to the source documents as the live network evolved.
- The negative impact: The final migrated data did not include the latest updates made during the project, rendering parts of the new system obsolete on day one (a minimal change-detection sketch follows this list).
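Detecting such drift does not require heavy tooling. The sketch below shows a minimal delta check, assuming the source documents are plain files on disk: a content-hash snapshot taken at project start is compared with one taken before cutover, and only the changed documents need to be re-processed.

```python
import hashlib
import json
from pathlib import Path

def snapshot(source_dir: str) -> dict[str, str]:
    """Map each file's relative path to a hash of its current content."""
    root = Path(source_dir)
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in root.rglob("*") if path.is_file()
    }

def delta(before: dict[str, str], after: dict[str, str]) -> dict[str, list[str]]:
    """List documents added, removed, or modified since the earlier snapshot."""
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "modified": sorted(p for p in before.keys() & after.keys() if before[p] != after[p]),
    }

if __name__ == "__main__":
    # At project start: take and persist a baseline snapshot.
    Path("baseline.json").write_text(json.dumps(snapshot("./legacy_documents")))
    # Months later, before cutover: compare the current state against the baseline.
    baseline = json.loads(Path("baseline.json").read_text())
    print(delta(baseline, snapshot("./legacy_documents")))
```

Re-processing only the changed documents keeps the migrated data aligned with the live network right up to go-live.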
7. Skipping preparations for a "big bang" migration
While a “big bang” migration can be effective, it failed when projects skipped thorough preparation.
- What was ignored: The necessity of months of meticulous planning and multiple data migration “dress rehearsals” to de-risk the go-live event.
- The negative impact: Without rigorous testing, unforeseen issues caused a complete halt of business operations during the cutover, a failure that proper preparation would have prevented (see the reconciliation sketch after this list).
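Part of that preparation is measuring each rehearsal. The sketch below illustrates a simple reconciliation check one might run after a trial migration, comparing per-class object counts between the source extract and the trial load; the object classes and figures are invented for the example.

```python
def reconcile(source_counts: dict[str, int], target_counts: dict[str, int]) -> list[str]:
    """Compare per-class object counts between the source extract and the trial load."""
    findings = []
    for object_class in sorted(set(source_counts) | set(target_counts)):
        src = source_counts.get(object_class, 0)
        tgt = target_counts.get(object_class, 0)
        if src != tgt:
            findings.append(f"{object_class}: {src} in source vs. {tgt} migrated ({tgt - src:+d})")
    return findings

if __name__ == "__main__":
    # Illustrative numbers only; a real rehearsal would pull these from both systems.
    source = {"cables": 18432, "splice_closures": 5210, "ducts": 9911}
    target = {"cables": 18432, "splice_closures": 5198, "ducts": 9911}
    report = reconcile(source, target)
    print("\n".join(report) or "Rehearsal reconciled: counts match for all object classes.")
```

Each rehearsal that ends with an empty discrepancy list is one less unknown on go-live day.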
Key success factors
Across all our projects, four success factors consistently stand out:
- Meticulous project preparation — a thorough problem analysis and management of customer expectations to prevent the pitfalls described above.
- Hybrid methodology — combining automation with human oversight to ensure accuracy and overall data quality.
- Workflow design — embedding incident handling and expert validation into the process.
- Project leadership and team organization — experienced project managers who manage timelines, mitigate risks, and deliver results.
Our Interactively Assisted Converter (IAC) embodies these principles. By uniting AI-driven processing, human-in-the-loop validation, and advanced functions like georeferencing, map stitching, and synchronization of inventories, we enable operators to turn fragmented, unreliable physical infrastructure data into a strategic asset.
The outcome is always the same: clean, structured data, a true single source of truth, and the elimination of the “garbage in, garbage out” trap.
You can read more about data migrations and the Interactively Assisted Converter here.
Discover how our experts can help you resolve your specific challenges. You can schedule a call with our technical team by filling out the contact form below.