The history of major IT integration programs is littered with tales of grand plans undone by data migration disasters. It’s said that 70 percent of all major IT integration programs fail to deliver on time, within budget, or at all. More often than not, the failure traces back to the data itself: it’s too disparate, too inaccurate, too misaligned, too voluminous, or all of the above. Clever approaches to data model mapping and cleansing have helped in some cases, but the industry has long sought an effective antidote to the data migration poison pill. Semantic search technology might be it.
Outside the Engineering Curriculum
Engineers are awesome at creating sophisticated and robust solutions to really big problems. Sometimes, though, they’re rooted in time-tested methodologies that allow old problems to fester. Data migration is one of those problems. Think about it – you’re a major CSP and your strategic IT goal is to reduce cost and increase speed and efficiency. You intend to achieve this through a major IT consolidation or transformation. After an assessment, the engineers determine that in order to consolidate and transform systems and processes, they’ll need to reconcile all of their disparate data sources. Without a common data store, common nomenclature, and strong data quality, they argue, it will be nearly impossible to create new, seamless, fully-automated processes end-to-end. So, the data migration, integration, cleansing, and reconciliation adventure begins. It is difficult, fraught with risk, and often ends in failure…or becomes a sunk cost nightmare that never ends at all.
What’s the problem in this scenario? Too much time and effort are spent fighting the data when the higher-level goals were to reduce cost and increase speed and efficiency. The brilliant engineers are trained, through an age-old curriculum, to break big goals into distinct problems and solve them. Over the years, enough time has been spent attacking data migration and quality problems that certain idealistic methodologies have entered the curriculum. These traditional, albeit somewhat ineffective, methods are taught, and as a result, their tendency to be ineffective perpetuates, even as new tools are created to help them fail less often. (It doesn’t hurt either that these methodologies have generated billions in billables for systems integrators…)
Enter the Hackers
Benedict Enweani and his pals from Orchestream kept running into these data migration and integration problems as they implemented their solution (which ultimately was acquired by MetaSolv/Oracle). For their next start-up, Ontology, they chose to tackle the problem head on. Instead of trying to refine or improve the old methodology, these telco-born engineers took a page out of the “hacker world.” They took a hard look at search engines. They recognized that millions of person hours had already gone into refining various approaches to semantic search and knowledge graphing (Google anyone?). They realized that search engines excel at sifting through enormous amounts of data and returning results that a user can mentally digest. So, they applied search technology to data integration and migration. The results have been notable and consistent.
Three Makes a Trend
Perhaps the most remarkable aspect of Ontology’s data migration work at T-Mobile Czech Republic is that it’s done. In the five-month time frame it promised, and within the budget it stated up front, Ontology reconciled and migrated several databases and a whole bunch of Excel files into a common Amdocs OSS network inventory database. For the uninitiated, keep in mind that this kind of effort would usually take 18 months – at a minimum – to pull off and is almost never done on time or within budget, if it’s achieved at all.
To quote Apollo Creed, one might say Ontology was just a “one-time-lucky-punk.” But, just as Creed was wrong about Balboa, you’d be wrong too.
Ontology demonstrated semantic search’s effectiveness for data migration at Telecom South Africa. The operator has an ambitious master data management (MDM) effort going on across all of its lines of business. It’s creating multiple data hubs – for location and physical network data, a customer hub, a unified OSS/BSS hub, and a resource hub, among others. Ontology, reports Enweani, has just delivered a “correlation capability for the resource hub” on time and “under budget.” In this case, the MDM infrastructure is living and breathing, so Ontology’s semantic search system will remain in place as the “correlation component after the MDM hubs are live,” explains Enweani, in order to “keep pulling in data and keep it right.”
Okay, you say, two on-time, under-budget successes are pretty good, but two doesn’t make a trend. So, here’s number three.
Telenor Denmark had acquired an IP network, a mobile network, and a fixed network and had common customers running across all three. It needed visibility into those commonalities. More importantly, the operator needed to execute change management and be able to notify customers when and how they would be affected. Initially, a team of 20 people took two weeks to determine which customers and SLAs would be impacted if they pulled something down for maintenance and what they’d actually have to measure to track the impact. The Ontology search engine has since been applied to the same process. It searches data across nine OSS and BSS databases and returns, nearly instantaneously, better results than the 20-person team had delivered in a fortnight.
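To make the kind of query concrete that previously took 20 people two weeks, here is a minimal sketch of an impact analysis over a dependency graph. This is an illustration only: the element, service, and customer names are invented, and Ontology’s actual engine correlates data across nine live OSS/BSS databases rather than walking a hand-built dictionary.

```python
# Hypothetical sketch: given a dependency graph linking network
# elements to services and customers, find everything affected by
# taking one element down for maintenance.
# All names and data below are invented for illustration.
from collections import deque

deps = {  # element -> things that directly depend on it
    "router-cph-1": ["vpn-svc-7", "dsl-svc-2"],
    "vpn-svc-7": ["customer:Acme (gold SLA)"],
    "dsl-svc-2": ["customer:Borg (silver SLA)"],
}

def impacted(element):
    """Breadth-first walk of the dependency graph from `element`,
    collecting every downstream service and customer."""
    seen, queue = set(), deque([element])
    while queue:
        node = queue.popleft()
        for dependent in deps.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

# impacted("router-cph-1") returns both services and both customers.
```

The hard part in practice isn’t the traversal – it’s building the `deps` graph in the first place from nine databases with inconsistent identifiers, which is precisely what the semantic search layer does.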
What Does it Look Like?
So, you have to ask – how does it do that? Enweani admits that out of the box, the search results make sense to a trained professional who is familiar with the operator’s environment and the disparate data and information models (naming conventions, node IDs, etc.) across their systems. In other words, engineers can immediately pull together all of the data they would otherwise spend weeks searching for, or writing complex SQL queries to compile. Over time, Ontology works with the client to translate the kinds of results these searches return into forms that less experienced users can understand. In iterations, they apply the same kinds of knowledge graph technologies that Google and Facebook use to achieve that.
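The core trick – linking records that refer to the same thing despite different naming conventions – can be sketched very simply. This toy example, with invented system names and node IDs, normalizes identifiers and groups matching records; Ontology’s real semantic matching is, of course, far more sophisticated than a regex.

```python
# Hypothetical sketch: correlating equipment records from two systems
# that use different naming conventions, by normalizing node IDs and
# grouping matches. All names and data are invented for illustration.
import re
from collections import defaultdict

def normalize(node_id: str) -> str:
    """Reduce a node ID to a canonical form: uppercase and strip
    separators, so 'gsw-PRG01/a' matches 'GSW_PRG01A'."""
    return re.sub(r"[^A-Z0-9]", "", node_id.upper())

def correlate(*sources):
    """Group records from any number of (system, records) pairs by
    normalized node ID: {canonical_id: [(system, record), ...]}."""
    graph = defaultdict(list)
    for system, records in sources:
        for rec in records:
            graph[normalize(rec["node"])].append((system, rec))
    return dict(graph)

# Invented sample data from two hypothetical systems.
network_inventory = [{"node": "gsw-PRG01/a", "ports": 48}]
billing_db = [{"node": "GSW_PRG01A", "customer": "Acme"}]

linked = correlate(("inventory", network_inventory),
                   ("billing", billing_db))
# Both records land under the same canonical key, "GSWPRG01A".
```

A search engine generalizes this idea: instead of one hand-written normalization rule, it indexes everything and ranks candidate matches, leaving the trained professional to confirm what lines up.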
Why Does It Matter?
If you look back, it’s pretty rare that I’ll cover a vendor product in this fashion (and no, Ontology isn’t paying us to do it either). Having worked in the systems integration business and co-authored two books on how painful OSS/BSS transformations can be, Ontology’s release about its project at T-Mobile Czech Republic caught my eye. My subsequent conversation with Enweani told me that these guys bear watching. There’s little question that data cleansing, reconciliation, migration, and consolidation are typically the most difficult, costly, risk-laden, and failure-prone activities that OSS/BSS integration programs face. The cost and failure rate are massive deterrents for risk-averse and budget-wise operators. If semantic search can dull or eliminate that pain, it could be a game changer for OSS/BSS transformation. Let’s see if Ontology can continue to deliver, and at increasingly larger project scopes.