For the migration of CA-DATACOM, RaincodeLabs has developed the DataKom service, an integrated end-to-end solution to migrate from CA-DATACOM to Unix or Windows platforms, covering both code and data. The migration process is 100% automated, which eliminates the risks induced by manual interventions. The key differentiators of RaincodeLabs’s DataKom service are:
• 100% automated migration requires no code freeze during the migration project;
• Delivers well-structured COBOL code, readable and maintainable by any COBOL analyst/programmer, and removes the dependency on CA tools and licenses entirely;
• Guaranteed functional equivalence between the current and target environment;
• “Lean and mean” approach, focused on getting the job done with top-notch specialists with relevant tools and experience, avoiding all unnecessary overhead;
• RaincodeLabs’s track record as a trusted partner, with excellent and relevant references.
The value proposition of DataKom can be summarized as:
• Integrated: RaincodeLabs is the sole vendor for the services, code and data migration tools related to the CA-IDEAL and CA-DATACOM migration, thereby optimizing costs and avoiding a complex split of responsibilities;
• Automated: The process is fully automated, to mitigate the risk involved in every manual transformation, even if it only applies to 1 or 2 percent of the system to migrate;
• Comprehensive: DataKom covers more than the basics of a typical CA-IDEAL and CA-DATACOM site; it can deal with some of the more exotic features as well.
DataKom provides all the tools needed to set up a robust and versatile data migration process. While the bulk of the data can be moved with no manual input beyond what is specified in the data dictionary, ad hoc data normalization and cleansing can be specified and integrated in the data migration process.
Data migration, and more generally, the entire migration project revolves around a comprehensive data dictionary. It can be initialized with the content of the original CA-DATACOM data dictionary, or can be populated by other means if needed. In any case, it is crucial for this data dictionary to be accurate and detailed enough, as it controls the entire migration process. The data dictionary captures:
• The structure of the tables, organized into fields and sub-structures
• CA-DATACOM areas and elements
• Data cleansing information
• Mapping information, controlling how CA-DATACOM data map to relational columns
The DataKom toolset includes a comprehensive GUI tool to manage the data dictionary, add and modify tables, fields, indexes, etc.
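To make this concrete, here is a minimal sketch of the kind of information such a dictionary entry holds. All names and the structure itself are illustrative assumptions for this example, not DataKom’s actual format:

```python
from dataclasses import dataclass, field

@dataclass
class FieldDef:
    name: str            # original CA-DATACOM field name
    picture: str         # COBOL-style picture, e.g. "X(30)" or "S9(7)V99"
    target_column: str   # relational column this field maps to

@dataclass
class TableDef:
    name: str            # table name
    area: str            # CA-DATACOM area the table belongs to
    fields: list[FieldDef] = field(default_factory=list)
    cleansing_rules: list[str] = field(default_factory=list)

# A hypothetical CUSTOMER table, with its mapping and cleansing information
customer = TableDef(
    name="CUSTOMER",
    area="CUSTAREA",
    fields=[FieldDef("CUST-ID", "9(6)", "CUST_ID"),
            FieldDef("CUST-NAME", "X(30)", "CUST_NAME")],
    cleansing_rules=["replace LOW-VALUE by spaces in CUST-NAME"],
)
```

Because every step of the migration reads this single model, adding a field or a cleansing rule in one place propagates to the whole process.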
Data migration is a deliverable, in the form of a fully automated process. Data migration is not about merely moving the data. It is about providing a process that can be run to move the most up-to-date version of the data at any time.
This process is run repeatedly during the migration project and improved continuously until one is fully satisfied with the results, in terms of data modeling and performance. One can then plan the switchover. Depending on the project at hand, the data migration is run once a week, once a day, or even more often.
The data migration process is improved continuously together with the customer. It allows for ad hoc corrections, for instance to check for a given field in a given table and convert some value to something else, replacing LOW-VALUE by spaces or vice versa. This is of course an application-level decision, which is taken by the stakeholder and definitely not by us, as we have neither the required expertise nor the authority to judge the relevance of such transformations from the application’s perspective.
The DataKom toolset allows for such corrections to be integrated in the data dictionary, so that they are defined once, and are run automatically as part of the data migration process from there on. DataKom can target DB2, Oracle or SQLServer databases.
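The LOW-VALUE example above can be sketched as a trivial cleansing function. On the mainframe, LOW-VALUE denotes the byte 0x00; the function name and interface are illustrative only:

```python
LOW_VALUE = "\x00"  # mainframe LOW-VALUE: the all-zero-bits byte

def replace_low_values(value: str) -> str:
    """Replace LOW-VALUE bytes by spaces.

    Whether this is the right thing to do for a given field is an
    application-level decision, taken by the stakeholder.
    """
    return value.replace(LOW_VALUE, " ")
```

Once such a rule is recorded in the data dictionary, it is applied automatically on every subsequent run of the data migration process.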
Experience seems to indicate that code migration is the riskiest component of a non-trivial migration project. This is where the original behavior must be maintained while the source code is transformed. This is where the true promise of the migration project must be fulfilled, namely to have a system that behaves exactly the same at the end of the process.
CA-IDEAL is the 4GL most commonly associated with CA-DATACOM. Data access statements as supported by CA-IDEAL are at a rather high level of abstraction, and can be converted to semantically equivalent embedded SQL statements.
In the context of DataKom’s CA-IDEAL migrations, the source code is converted to strictly equivalent COBOL code. Why COBOL? Simply because IDEAL is at heart a mainframe language, with mainframe data types and mainframe behaviors attached to these data types; and the only commonly available language that supports these data types today is COBOL. Aiming at COBOL also allows for a more homogeneous environment to maintain at the end of the migration project, if the system to migrate is not made of IDEAL code only, but also contains some COBOL code to start with.
On the other hand, CA-DATACOM can also be called from within common third-generation languages (mostly COBOL, but also PL/I), with a far more primitive mechanism. Programs then call CA-DATACOM by populating a data structure describing the operation to perform, and passing this data structure to a program named ‘DBNTRY’.
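This calling convention can be sketched as follows. The field names, command codes and return codes below are illustrative assumptions, not the actual CA-DATACOM request block layout; the point is that a single entry point interprets a caller-populated structure, so a drop-in replacement can route the same operations to a relational backend:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequestBlock:
    command: str              # operation to perform, e.g. "READ" or "WRITE"
    table: str                # target table
    key: str                  # record key
    data: Optional[str] = None
    return_code: str = "00"   # "00" = success in this sketch

def dbntry(request: RequestBlock, store: dict) -> Optional[str]:
    """Single entry point, in the spirit of 'DBNTRY'.

    A replacement module can interpret the very same request block and
    perform the operation against DB2, SQLServer or Oracle instead.
    """
    if request.command == "WRITE":
        store[(request.table, request.key)] = request.data
        request.return_code = "00"
        return None
    if request.command == "READ":
        record = store.get((request.table, request.key))
        request.return_code = "00" if record is not None else "14"
        return record
    request.return_code = "99"  # unsupported command
    return None
```

The caller never sees what sits behind the entry point, which is precisely what makes the as-is porting of COBOL and PL/I programs possible.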
To the non-specialist, translating a piece of software from IDEAL to COBOL (or to any other language for that matter) is similar to compilation: an atomic process that converts a system written in IDEAL into a (hopefully) equivalent system in COBOL.
Divide and conquer
While possible in theory, such a monolithic approach (where the translation is performed by a single, probably complex process as opposed to a sequence of smaller, more manageable components) is vastly sub-optimal, because it requires the translation process to deal with all the facilities provided by CA-IDEAL, in all possible combinations.
The translation is divided into three steps, but this is totally transparent to the end user, who only sees the final result of this scaffolding of intermediate transformations.
IDEAL to IDEAL transformation
When dealing with a non-trivial translation, where the source language does not map trivially to the target language, it is preferable to start by applying a number of simplifications to the source system to transform it to a canonical form, thereby reducing the number of constructs and cases to be taken care of by the translation process.
IDEAL to COBOL transformation
This is the process which ultimately translates a system from IDEAL to COBOL. It must be complete, in the sense of addressing all the supported features of IDEAL (or at the very least, a canonical subset thereof) and providing an equivalent implementation in the target language. It must also be monolithic: if it were divided into processes that each address some aspects of the translation from IDEAL to COBOL, its intermediate results would be inconsistent. They would be made of a mix of IDEAL and COBOL, which is a less than desirable property.
COBOL to COBOL transformation
Similarly to the division of the preliminary transformation into a sequence of independent passes, allowing for additional transformations after the migration process is an equally appealing proposition. It allows for separation of concerns and reusability. Equally importantly, it allows the IDEAL to COBOL phase to remain as simple as possible, since a number of quality improvement processes can now be performed post hoc.
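The three stages described above amount to a simple function composition. In this sketch the stage bodies are trivial placeholders standing in for the real tooling; only the shape of the pipeline is the point:

```python
def canonicalize_ideal(src: str) -> str:
    # IDEAL -> IDEAL: simplify the source into a canonical form,
    # reducing the number of constructs the translator must handle
    return src.strip()

def ideal_to_cobol(src: str) -> str:
    # IDEAL -> COBOL: the single, monolithic translation step
    return f"COBOL<{src}>"

def restructure_cobol(src: str) -> str:
    # COBOL -> COBOL: post hoc quality improvements on the output
    return src.upper()

def translate(ideal_source: str) -> str:
    # The end user only ever sees the final result of the chain
    return restructure_cobol(ideal_to_cobol(canonicalize_ideal(ideal_source)))
```

Each stage can evolve, be tested and be reused independently, which is exactly the appeal of the divide-and-conquer approach.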
What makes this even more appealing is the fact that the COBOL restructuring facility used at the end of this translation chain is a standard component provided by RaincodeLabs in a number of situations, including PACBASE, COOL:Gen or MetaCOBOL migrations.
DataKom includes a tool that can generate a ‘DBNTRY’ replacement module that behaves exactly like the original, except for the fact that ultimately, DB2, SQLServer or Oracle is used for persistence. This allows application programs written in COBOL and PL/I to be ported as is. They don’t have to be modified; they don’t even have to know that what they are calling is not the true original CA-DATACOM, but something that just behaves exactly like it.
This generated component is database-specific, in the sense that it is made of a number of predefined statically compiled SQL statements, in order to get the best performance possible, significantly better than if they were generated dynamically. When this option is used, the data dictionary must be maintained using the GUI tool that comes with DataKom. Evolutions to the data model can be brought to production by regenerating the ‘DBNTRY’ replacement program.
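The idea of deriving fixed, predefined statements from the data dictionary can be sketched as follows. The function and its parameters are illustrative assumptions; the actual generator emits statically compiled embedded SQL rather than strings:

```python
def select_by_key_sql(table: str, columns: list[str], key_column: str) -> str:
    # One fixed, parameterized statement per table, derived from the
    # data dictionary; regenerating the replacement module picks up
    # any change to the data model.
    return f"SELECT {', '.join(columns)} FROM {table} WHERE {key_column} = ?"
```

Because the statements are fixed at generation time, the database can prepare and optimize them once, instead of parsing dynamically built SQL on every call.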