Accelerated Data Migration Executions

Increase the speed, quality and success of your data migration with the Hopp Core component, the executable 'engine room' for a simpler, faster data migration

Traditional data migration development is costly and time-consuming

Conventional data migration requires either months of software development and data engineering or a highly skilled team of ETL developers using expensive commercial solutions. Instead, use the Hopp Core component to deploy and execute your data transformation.


  • Code generation

    Leveraging Core’s automatic code generation capabilities, the mapping specifications from Studio are quickly and easily turned into code, ensuring fast and reliable execution with no discrepancies between the specifications and the code.

  • Efficient Functionality

    The Core’s runtime environment executes the generated code and ensures it runs reliably. Think of it as an intelligent assistant that automatically creates rules and migration code from the mappings defined in Studio.

  • Multi-Project Support and Execution

    The Core component handles multiple data migration projects simultaneously on multiple servers. Its adaptability and scalability shine when managing several projects at the same time, making it particularly useful in large, complex projects.

Core Component

How It Works

The Core is your go-to tool for generating migration code based on the Studio mapping. It provides an extensive execution framework and supporting functionality to supplement your generated code, making it a robust and complete runtime engine.

Core equips you with the tools you need to deploy the generated code, add bespoke transformations, and manage the migration process easily. 

How to Generate Code in the Core

While the Core generator takes the lead in generating the bulk of the code essential for executing data migration, certain migration rules may require a human touch.

Placeholders for Rules

To make this easier, the generated code includes standard "stubs", or placeholders, for these rules. These stubs facilitate manual code extension and ensure that developers can quickly and safely add their own bespoke rules to the migration code.

The generated stub is in effect a virtual method in a base class created by the generator. This base class is marked as partial, a C# keyword that allows a class to be implemented across multiple files.

The manual implementation is then provided by the developer in another file that the generator does not touch. This mechanism ensures a robust, type-safe correlation between the code generated from the Studio mapping and the implementation of the manual rules.
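As a sketch of how this pattern can be realized in C#, the generated file and the manual file might look like the following. All class and rule names here are illustrative, not actual Hopp-generated identifiers:

```csharp
// CustomerRules.Generated.cs -- emitted by the generator on every run.
// Names are illustrative, not actual Hopp identifiers.
public abstract class CustomerRulesBase
{
    // Generated stub for a manual rule declared in the Studio mapping.
    // The default body signals that the rule is missing.
    public virtual string DeriveSegment(decimal annualRevenue)
    {
        throw new System.NotImplementedException(
            "Manual rule DeriveSegment has not been implemented.");
    }
}

// The generator emits one part of the partial class.
public partial class CustomerRules : CustomerRulesBase { }

// CustomerRules.Manual.cs -- written by the developer and never touched
// by the generator, so regeneration cannot overwrite it.
public partial class CustomerRules
{
    public override string DeriveSegment(decimal annualRevenue)
        => annualRevenue >= 1_000_000m ? "Enterprise" : "SMB";
}
```

Because the override lives in a file the generator never rewrites, regenerating the code from an updated mapping leaves the bespoke rule intact, and any change to the stub's signature surfaces immediately as a compile error rather than a runtime failure.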

Automation Module (optional)

Hopp delivers a PowerShell module that can be used to automate the jobs that are normally submitted manually from the Portal Operations user interface.

For many years, the normal use case has been that one or more team users (operators) access Portal Operations and from there submit migration jobs, manually identifying Business Objects, Views, Tables, Valuesets, etc., and submitting jobs to load, export, import, and so on.

While this will certainly remain the case, the PowerShell Automation module brings some important benefits to the table:

The Hopp Automation module consists of a series of PowerShell cmdlets: some list items such as Business Objects, Valuesets, Views, and Tables; others create parameterized jobs that are ready to be submitted. Parameterized jobs can then be submitted, and finally, a cmdlet can wait for submitted jobs to finish.

Especially interesting for the interactive use case, the user can create a schedule to combine sets of ready-to-submit, parameterized jobs. The schedule can then be submitted as one unit, and if it terminates due to a faulted/cancelled job, it can be restarted to resume execution from the point of failure.


Can bespoke rules handle complex transformation logic?

Yes, definitely. This software component is designed to work with rules of all levels of complexity, so developers can implement rules that fit the needs and details of their projects.

Bear in mind that the Core generates the vast bulk of the migration code automatically: developers only need to implement bespoke rules for bespoke transformation logic, while the Core allows those rules to implement transformation logic of any complexity.

The Core contains the code generators that produce the code to execute the migration. In addition, the Core provides the base class libraries supporting the generated code. These base class libraries also contain interface functionality that enables the Core to discover the generated code and call it to perform the different steps of the migration. Based on the mapping, the generated code automatically handles by far most of the migration logic.

In some cases, the mapping contains information describing the interface to a manual rule that the generated engine will call. In these cases, the generated code includes a default implementation of the rule, which reports an error event to the Core Runtime stating that the rule has not been implemented.
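As an illustration of that default behavior, a generated placeholder might report the missing rule through a runtime event interface. The `IRuntimeEvents` interface and its `ReportError` method below are invented for this sketch; they are not Hopp's actual event-reporting API:

```csharp
// Hypothetical sketch of a generated default rule implementation.
// IRuntimeEvents and ReportError are invented for this example.
public interface IRuntimeEvents
{
    void ReportError(string ruleName, string message);
}

public class GeneratedRuleDefaults
{
    private readonly IRuntimeEvents _events;

    public GeneratedRuleDefaults(IRuntimeEvents events) => _events = events;

    // Default body emitted when the mapping declares a manual rule
    // that the developer has not yet implemented.
    public virtual string MapAccountType(string sourceCode)
    {
        _events.ReportError(
            nameof(MapAccountType),
            "Manual rule declared in the mapping but not implemented.");
        return null;
    }
}
```

Reporting an event rather than silently returning a value means an unimplemented rule shows up in the migration's error reporting instead of slipping through as bad data.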

Can I see the Core in action?

Certainly! You can request a demo to witness firsthand how Core transforms your data migration process. Contact us or our support team to schedule a demonstration.

Bespoke rules are implemented in Visual Studio by overriding a virtual method provided in the generated code. The overriding method is written manually in a separate file (using the partial class mechanism in C#), protecting the bespoke implementation from being overwritten by the code generator.

This is a simple, well-known mechanism, and implementing bespoke rules is very straightforward.

Normally, in a data migration setup of any significant size, the Core Runtime will run on one or more dedicated servers.

However, in a small setup, it is perfectly possible to establish and execute the Core Runtime on a local machine as well. A complete Core setup, leveraging the full execution facilities of the Core, can iterate concurrently over separate data migration projects.

In this full-scale scenario, the Core can execute several isolated migration projects on one or more dedicated servers in so-called Tracks. One server may host multiple tracks, and the Core can handle multiple servers – each with multiple tracks.

Book a demo!