Support for every task associated with a complex data migration, from mapping through execution
to issue tracking.

migFx - rich and deep functionality

- a complete solution

The Studio lets you build Business Objects, map data elements and assign rules. The Engine turns the mapping into code, while the Director lets the user specify which code to execute, when and where. Finally, all results can be viewed, analysed and tracked using the Tracker.

Key migFx features:

Simple stepwise process

Run and Rerun

The entire migration takes place in a separate environment – no coupling to Source or Target System

You are free to iterate to your heart's content without any concerns

You offload to the Target System only when you are satisfied with the quality of the transformed data

Use an intermediary copy for testing in Target System while you continue iteration and react instantly to issues from the testing

Trace Issues

The web application provides easy access for all stakeholders

Easy and integrated user involvement and collaboration

Access to all events helps trace issues in a structured way

You have all information at hand and all users have the same reference and use the same terms

A profound shift in thinking and process

You can iterate on event codes, on keys, etc. Combined with the easy and fast iteration cycle, you will find yourself iterating many times daily

Run a cycle even for the smallest of issues

Continuously building quality to a higher level than you imagined possible

Build on Business Logic

Focus on Business Logic

Structure added to data

Common reference for tracking

Well documented and auditable

Natural grouping of tasks

Facilitates fine grained iteration

UI and Structure

Business Objects are the basis for the mapping, for the actual iterations of the migration executions and for the surfacing of the migration iteration results and events. When executing, every business object passes through the migration steps as one unit.
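The per-object flow can be pictured with a minimal sketch. This is illustrative Python, not migFx code (the actual engines are generated C#), and all names and step logic are hypothetical:

```python
# Each business object passes through the migration steps as one unit.
# Hypothetical three-step pipeline: export -> transform -> target.
def export_step(source: dict) -> dict:
    # Normalize raw source data into an intermediate format
    return {"name": source["NAME"].strip(), "balance": float(source["BAL"])}

def transform_step(intermediate: dict) -> dict:
    # Apply mapping rules, expressed in the target system's terms
    return {"CustomerName": intermediate["name"].title(),
            "Balance": round(intermediate["balance"], 2)}

def target_step(target: dict) -> dict:
    # Validate against target-system requirements before offload
    assert target["CustomerName"], "CustomerName is required"
    return target

def migrate(business_object: dict) -> dict:
    # The whole object moves through all steps as one unit
    return target_step(transform_step(export_step(business_object)))

print(migrate({"NAME": " ada lovelace ", "BAL": "99.456"}))
# → {'CustomerName': 'Ada Lovelace', 'Balance': 99.46}
```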

The Studio lets you build Business Objects, map data elements and assign rules. The mapping is turned into code by the Engine, while the Director lets the user specify which code to execute, when and where. Finally, all results can be viewed, analysed and tracked using the Tracker.  

migFx - UI and structure

migFx components

migFx consists of a set of linked and collaborating components. Each component is a feature-rich application, and the combined suite of components provides a complete foundation and support for all aspects of the data migration process.


migFx Studio

The Studio is a Windows productivity application used to create the mapping. The Studio supports and enforces highly structured specifications.

In addition, the Studio contains extensive cross-referencing and reporting functionality, significantly improving the overall understanding and overview of the mapping.

Finally, the Studio validates the mapping, clearly reporting any errors and inconsistencies that in turn would cause incorrect or invalid data migration results.

A fundamental element in any data migration scenario is the specification of how to migrate the source data to the target data.

The success of any data migration is directly linked to the quality of this specification and how it is translated into the executable that performs the actual data migration. This is only underlined by the fact that the specification often grows to enormous size and complexity. It is difficult to maintain validity and coherence in the specification itself. Most importantly: in many cases it proves impossible to maintain complete fidelity in the consistency between the specification and the executable.

A key component in migFx is the Studio. This is a dedicated multiuser productivity application providing a complete and consistent interface to produce the mapping. Using the Studio, a team of users can collaborate to produce the mapping for the framework executables.

The Studio contains rich cross-reference and cross-validation functionality to ensure a very high degree of consistency and coherence in the mapping. Most importantly, the Studio enforces mapping of an extremely structured nature. In fact, the mapping is so structured that it serves as input to a code generator that generates the migration executables.

It is a key quality of migFx that the consistency between the mapping and the actual executable is inherently guaranteed.

In the case of repeated data migrations from varying source systems to the same target system it is crucial that a clear separation exists between the mapping for source data and the mapping for target data.

Using the Studio, the mapping for any data migration is separated into two mapping types, ensuring the highest degree of reuse of these specifications from migration project to migration project.

Target Map

The Target Map is founded on the description of the Target data.

The Target Map eliminates internal references and data that can be derived from other data, and exposes the data that cannot be derived and thus must be received. In addition, the Target Map can implement a wide host of runtime validations to ensure the highest possible quality of the target data produced by the data migration.

The Target Map is strongly linked to the target system and this mapping can be reused in all migrations to the same target system. The value of improving/extending the Target Map is retained over time, from project to project.

From the Studio, it is possible to export the Target Map in two ways:

  • As an interface specification that can be imported into the Studio when working on the Source Map (see below)

  • As a complete, structured specification that serves as input for the engine generator generating the target migration engine

Source Map

The Source Map is based on both the source data descriptions as well as the data requirements exposed by the Target Map.

While the target mapping exposes the data that must be received, it does so in the terms of the target system. In addition, all validation is founded on value sets known by the target system.

On the other hand, the Source Map describes how – based on the source data – to produce the data required by the target map.

Finally, the Source Map can be exported from the Studio as a complete, structured specification serving as input for the engine generator generating the export engine.

The Studio is a Windows application running locally on a PC or laptop. The mapping produced by the Studio is a collection of (xml) files residing locally on the user’s machine.

While this enables an individual user to work on a given mapping locally on his/her Windows machine, the Studio can be backed by a central repository (a SQL Server database). The repository provides the functionality necessary for a team to collaborate on the same mapping (check-out, check-in and get-latest).

The investment in the mapping can be safeguarded by implementing a suitable backup scheme using the facilities in SQL Server.


The Studio exports the entire mapping to the Engine, which then generates the code to execute the data migration.

The Engine as such contains the code generators generating the engine code as well as base class libraries containing common, supporting functionality for the generated code.

While the engine generator provides by far the majority of the code necessary to execute the data migration, certain migration rules may be implemented by hand. The generated code contains stubs for these rules, making their manual implementation straightforward.

The Engine contains the code generators generating the code to execute the migration. In addition, the Engine provides the base class libraries supporting the generated code.

Apart from supporting the generated code, the base class libraries also contain interface functionality that enables the Director to discover the generated code and call it to perform the different steps in the migration.

Based on the mapping, the generated code automatically handles the vast majority of the migration logic.

Visual Studio is the state-of-the-art Integrated Development Environment (IDE) provided by Microsoft to write and maintain .NET program code.

In practice, the generated code for a given mapping resides inside a folder in a normal Visual Studio C# class library project.

Manual rules are implemented in this Visual Studio project by overriding a virtual method provided in the generated code. The overriding method is specified manually in a separate file (using the partial class mechanism in C#), protecting the manual implementation from being overwritten by the code generator. This is a simple, well-known, mainstream mechanism, and the implementation of manual rules is indeed very straightforward.

  • Manual Rules can be created in Studio and implemented in Visual Studio
  • 1–1 reference between the mapping and generated setup in Visual Studio
  • Manual rules are separated from generated code, never overwritten
  • Code-completion support (Intellisense)
  • Renames/Deletions in Studio are caught as compile-time errors
  • Most rules are simple
  • Rules of any complexity can be implemented
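The partial-class mechanism itself is C#-specific, but the separation it achieves can be sketched in plain Python: the generator owns one file with a stub method, and the manual rule lives in a separate, never-regenerated file. All class and rule names below are hypothetical:

```python
# generated_engine.py -- regenerated on every mapping export; never edited by hand
class CustomerEngine:
    """Generated migration logic for a hypothetical Customer business object."""

    def migrate(self, source: dict) -> dict:
        target = {"Name": source["NAME"].title()}
        # Stub call for a manual rule declared in the Studio mapping:
        target["CreditRating"] = self.rule_credit_rating(source)
        return target

    def rule_credit_rating(self, source: dict) -> str:
        raise NotImplementedError("Manual rule: implement in a separate file")


# manual_rules.py -- hand-written; survives regeneration of generated_engine.py
class CustomerEngineWithRules(CustomerEngine):
    def rule_credit_rating(self, source: dict) -> str:
        # A simple manual rule: derive a rating from the source balance
        return "A" if source["BALANCE"] >= 0 else "C"


engine = CustomerEngineWithRules()
print(engine.migrate({"NAME": "ada lovelace", "BALANCE": 100}))
# → {'Name': 'Ada Lovelace', 'CreditRating': 'A'}
```

Renaming or deleting the rule in the generated file would break the override, which mirrors how such changes surface as compile-time errors in the C# setup.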


Director is the migFx component that uses the generated engines to execute the data migration.

The user simply loads source data, populates value sets, executes the data migration and offloads the target data.

Using the Director, it is possible to iterate over the data migration in a very fine-grained manner. The user can iterate all business objects that generated a specific event during migration, or iterate a single business object, etc.

In addition, the Director Runtime supports the operation and execution of multiple data migration projects across a host of different servers.

The Director is the runtime manager. It uses the Engine code to execute the data migration. Through the Director, the user loads source data, populates value sets, executes the data migration and offloads the target data produced by the code.

  • Handles all aspects of the runtime environment
    • Sequences Business Objects (resolving dependencies)
  • Iterates effortlessly
    • Everything
    • Per Business Object
    • Per Event
    • Limited number
    • On a key (e.g. CustomerNumber)

While the class libraries described above contain the generated code, the manual rule implementations, the interfaces, indeed all the bits and pieces necessary to migrate one business object, the Director component is charged with the task of migrating all business objects by calling the engine interfaces provided, one object at a time.

In addition, the Director is responsible for the housekeeping necessary to store all migration results, intermediary results as well as all events occurring during migration, and finally all audit information collected during the migration. It is the responsibility of the Director to keep track of this information, even when the migration is iterated repeatedly and events and even items may appear, reappear and disappear.
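The housekeeping of events across iterations can be illustrated with a small sketch (the event keys are hypothetical, not the actual Director data model): comparing the event sets of two iterations classifies each event as new, recurring or fixed.

```python
# Hypothetical event keys: (business_object_id, event_code)
previous_iteration = {("CUST-1", "E100"), ("CUST-2", "E210"), ("CUST-3", "E100")}
current_iteration = {("CUST-1", "E100"), ("CUST-4", "E350")}

new_events = current_iteration - previous_iteration        # appeared this iteration
recurring_events = current_iteration & previous_iteration  # persisted across iterations
fixed_events = previous_iteration - current_iteration      # disappeared this iteration

print(sorted(new_events))        # [('CUST-4', 'E350')]
print(sorted(recurring_events))  # [('CUST-1', 'E100')]
print(sorted(fixed_events))      # [('CUST-2', 'E210'), ('CUST-3', 'E100')]
```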

The complete Director component consists of two main parts:

Director Runtime

The Director Runtime runs in a server setup and executes all the data migration processing, running jobs that perform a multitude of different tasks.

The most directly relevant job types are, for instance, Load Source Data, Perform Export Step, and Perform Transformation and Target Step. But many different job types in combination make up all the functions necessary to run a data migration iteration from start to finish.

Director Client

The Director Client is a Windows application that connects to the server-based Director Runtime and enables a user to orchestrate the data migration and to initiate and monitor jobs as required.

While the collaboration and workflow around the mapping and the generated code takes place locally on a personal machine, once the engine libraries have been deployed in the server environment of the Director runtime, the rest of the framework execution flow takes place in the Director runtime and is managed using the Director Client.

While the Director Runtime is executing migration jobs, the user monitoring the job execution may view the events raised by the migration engine as they happen. In this way, it is possible to react quickly to serious and invalidating events.


The Tracker application displays the results of an iteration in a web-based interface. These results consist of:

  • All events produced
  • The data for all business objects used and produced by the iteration
  • Deltas showing the differences between iterations

In addition to the passive presentation of the results, the Tracker contains rich workflow functionality allowing the involved users to manage responsibility, comments and state (accepted, fixed, etc.) for all events.

The Tracker web application presents the migration results and events. As a web application, it is easily reachable by a wider audience, which may include users from the businesses involved in the migration. The aggregations in the Tracker provide a comprehensive understanding of the overall quality of the migration iteration, including baseline comparisons that reveal trends since the last iteration.

In addition, the application surfaces detailed information for each migrated business object, providing rich support for the users to analyze the results and seek explanations for any issues.

Finally, the Tracker provides collaboration functionality enabling the users to keep track of the state of events (new, fixed, accepted, recurring etc.), to comment on events and to appoint some user to be responsible for resolving the event.

For any type of business object, the application presents:

  • how many business objects of this type have been migrated successfully and how many were rejected during the migration
  • an aggregation of the events that occurred for this type of business object

The application enables the user to search or drill down to any specific business object to view:

  • the events that occurred for this business object
  • links to any related business objects (ancestors and/or descendants)
  • the data that were produced for each step in the migration process:
    • the data extracted from the source system
    • the intermediate result produced by the export engine
    • the result produced by the target engine

The Tracker keeps track of the state of events (new, fixed, accepted, recurring, etc.) and any comments, and makes it possible to assign issues to users and teams.

Agile workflow

How does it all fit together in a workflow when a data migration project is operational?

When modifications to the mapping take place concurrently with repeated migration iterations and ongoing tests of the migration results in a target system setup?

How does it even start up the first time migFx is put to the task in a new installation context?

The different maps sit on top of each other. The Source Map sits on top of the Target Map. This may give the impression that the entire target specification must be completed before progressing to the Source Map. However, this impression is far from the truth.

In fact, the iterative nature of the entire framework makes it entirely feasible to proceed in a less daunting and more efficient manner.

The best practice is to start out with the Target Map of just one business object hierarchy, and even within this one, just a minor part of the entire hierarchy. Moreover, quickly proceed to the Source Map of the same, limited business object hierarchy.

In this way, the iterative process of the entire framework gets started very fast. The framework calls for this vertical approach, where elements are added to the Target and Source Maps incrementally, little by little.

Due to the ease of iterating the entire process of migration specification modifications, code generation, deployment and execution, new areas can constantly and seamlessly be added, and existing areas deepened and enhanced.

Once a first limited business object has paved the way and a migration track is operational including the tracking web application, the migration project is in process and the normal workflow is in fact in place. This is an easy iterative flow involving the tasks shown below.

  • Mappings are modified. New business objects may be added, existing business objects may be modified. Mappings are published and imported upwards in the mapping type chain as described. Modifications are constantly checked in to the common mapping repository
  • New code is generated. A user gets the latest version of the mapping from the common repository and publishes the generator input and runs the code generators
  • New manual rules may be developed, and/or existing rules modified
  • The engines are built and deployed in the Director Runtime environment
  • The relevant track is reset so the new engines are loaded, and execution iterated as needed. Either everything, or cherry-picking business objects in one way or another
  • The migration results and events are shown in the Tracker
  • Based on the feedback surfaced in the tracking web application, modifications flow back into the mapping, and the cycle repeats

This iterative workflow is typical. Moreover, it can be very fast. Not counting time needed for modifying mapping and manual rules, the inherent processes performed by migFx to generate, build, deploy and execute can be measured in minutes.

It is common for a migration team to iterate the migration process in this manner many times daily.

Fast feedback

Feedback may be introduced into the workflow from several sources. Unexpected occurrences of certain events may be shown in the Tracker; a test in a target system test instance may uncover issues, etc.

migFx provides strong support for tracing problems, right from the event or problem in the target system, through the data flowing through the migration steps and back to the area in the mapping in need of a review.

While the above workflow enables the migration team to iterate to a very high degree, in many instances it is relevant to freeze a snapshot of the data migration iteration at a certain point in time.

Typically, whenever the target data result is unloaded from migFx and delivered to a test instance of the target system, it is beneficial to freeze a snapshot exactly corresponding to the data that were offloaded and delivered. When the target system test instance then undergoes tests to verify the quality of the migrated data, this snapshot can be used as described above to analyze and trace any problems uncovered by these tests.

At the same time, iterations in the original track can resume, allowing the overall forward progress of the entire agile workflow – now including corrective actions fed back from the target system test instance.

A special case of this snapshot is when the migration project finishes and the migrated data are finally delivered to the production instance of the target system. In this case the snapshot may be kept for an extended period. Analysis of any future issues in the target system may be significantly supported by knowing exactly what data were delivered by the migration project, as well as all migration events.

Freezing any migration snapshot in this manner is a simple matter of copying the databases and other artefacts that comprise the Director Runtime track used by the migration iterations for the project.

  • Events in the Tracker and/or problems in the target system are directly related to a given business object
  • In the Tracker, all information for any specific business object is easily located. This information includes:
    • All events that occurred during the migration
    • All data for each of the migration steps (source data, export result, transformation result and target result)
  • For each of the migration steps the data is clearly organized in the same hierarchy of child business objects as is defined in the mapping

In this manner, migFx and the partitioning of the entire mapping and execution flow in business objects provide efficient means of analyzing problems and locating the exact area in the mapping in need of a review.

Extension points

The most important extension interfaces of migFx are the Valueset Provider interface and the Offload interface.

As a principle, the tasks of implementing these extensions are part of deploying migFx in a given context.

For all mandatory extensions, the framework does come with default extension implementations, allowing rapid initial deployment as well as opening rich possibilities for deeper, proprietary integration.

Below is a list of the most common extension points, their default implementations, and examples of proprietary implementations.

Load Source Data

Populates the generated source data tables in the export database.

Default Implementations:

  • Read data content from an Excel sheet

  • Read data from an SQL statement executed against a database

Proprietary implementation sample
Direct database to database insert or bulk load (if direct access to source system database is possible).
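A minimal sketch of the "SQL statement against a database" style of load, using Python and in-memory SQLite stand-ins for the source database and the export database (the schema and table names are hypothetical):

```python
import sqlite3

# In-memory stand-in for a source system database (hypothetical schema)
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
src.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ada"), (2, "Grace")])

# In-memory stand-in for the export database with a generated source data table
export = sqlite3.connect(":memory:")
export.execute("CREATE TABLE src_customer (id INTEGER, name TEXT)")

# The extension runs an SQL statement against the source
# and bulk-inserts the resulting rows into the staging table
rows = src.execute("SELECT id, name FROM customers").fetchall()
export.executemany("INSERT INTO src_customer VALUES (?, ?)", rows)

print(export.execute("SELECT COUNT(*) FROM src_customer").fetchone()[0])  # → 2
```

A direct database-to-database bulk load would follow the same shape, just with production drivers and batching in place of the in-memory connections.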

Offload Target Data

The framework holds the created target data for each business object as an xml element. This xml element contains the child business object hierarchy – each business object in the hierarchy containing all target data for this object.

This extension interface receives these xml documents in order to further process the target data in the implementation context.

Default implementation
Offload to xml files. One xml file for each root business object. Each file contains a copy of all the xml elements for the instances of the business object.

Proprietary implementation sample
If the target data structures in fact represent rows to be inserted in a target database, it is straightforward to implement an offload extension that shreds the target data xml to one file per target table.
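The shredding idea can be sketched with Python's standard XML library. The element names are hypothetical, and for brevity the rows are grouped per table in memory rather than written to one file per table:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical target-data document: a root business object with a child hierarchy
doc = ET.fromstring(
    "<Customer><Id>1</Id><Name>Ada</Name>"
    "<Account><Id>10</Id><Balance>99.5</Balance></Account>"
    "<Account><Id>11</Id><Balance>-3.0</Balance></Account>"
    "</Customer>"
)

# Shred: one row set per business-object element, one column per data element
tables = defaultdict(list)

def shred(elem):
    # Leaf children become columns of this object's row
    row = {c.tag: c.text for c in elem if len(c) == 0}
    tables[elem.tag].append(row)
    # Non-leaf children are nested business objects: shred them as their own rows
    for child in elem:
        if len(child) > 0:
            shred(child)

shred(doc)
print(tables["Account"])  # → [{'Id': '10', 'Balance': '99.5'}, {'Id': '11', 'Balance': '-3.0'}]
```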

Import data structures

This extension interface imports data structures into the Studio for use in the Source Map or the Target Map.

Default implementation
The Studio comes with a default extension to import source and target data structures from an Excel spreadsheet.

Proprietary implementation sample
Functionality to read the data structure directly from the table schema in some database management system.

Populate value sets

This extension interface is used to populate value sets with data.

Default implementation
Read data content from Excel spreadsheets.

Proprietary implementation sample
Functionality to read value set content directly from the target system.
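Conceptually, a populated value set is a lookup table consulted during transformation. A small Python sketch (the value set name, codes and error handling are hypothetical; the default extension would read the rows from an Excel sheet instead of inlining them):

```python
# Hypothetical value set mapping source branch codes to target branch ids.
# The default extension reads rows like these from an Excel sheet.
rows = [("BR-001", "1001"), ("BR-002", "1002"), ("BR-XX", "9999")]
branch_valueset = dict(rows)

def translate_branch(source_code: str) -> str:
    try:
        return branch_valueset[source_code]
    except KeyError:
        # In migFx a miss would typically raise an event on the business object
        raise LookupError(f"Branch code {source_code!r} not in value set")

print(translate_branch("BR-002"))  # → 1002
```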


This extension interface – if present – is called by the Director Runtime for each business object during migration. The interface permits the extension implementation to hand back audit data for the Director Runtime to store.

Another part of the extension interface is called by the Director Runtime to hand over the collected audit data.

Default implementation

Proprietary implementation sample
Commonly, the audit data are used to reconcile the migration results with an expected result from some other data source.
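Such a reconciliation often boils down to comparing per-object-type counts. A minimal sketch, with hypothetical figures and object type names:

```python
# Per-type counts handed back by the audit extension during migration
audit = {"Customer": 1200, "Account": 3420}
# Expected counts obtained from some other data source, e.g. the source system
expected = {"Customer": 1200, "Account": 3425}

# Any type whose migrated count differs from the expected count is a discrepancy
discrepancies = {
    t: (audit.get(t, 0), expected[t])
    for t in expected
    if audit.get(t, 0) != expected[t]
}
print(discrepancies)  # → {'Account': (3420, 3425)}
```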


This extension interface – if present – is called by the Director Runtime for each root business object rejected during the migration.

Default implementation

Proprietary implementation sample
Commonly, an installation will implement this extension to perform some action for rejected business objects. For instance, if a bank account is rejected, it is nonetheless imperative to place the account's balance on some technical account in the receiving bank.

There are other specialized extension interfaces. Just get in touch to learn more.

migFx - Concepts Explained

