Part 6: Data Integration – Return of the Best Practices

Recap Best Practices: Part 2

Welcome back!

My previous blog discussed the use of Staging Databases as part of integration design, with some tips and pointers for building a staging database (if you decide to use one, that is).

My background with data integration is in using either out of the box/free tools or 3rd party data integration software (like Scribe or KingswaySoft), and my methods for designing and building data integrations have evolved over time. This blog details the best practices for dealing with performance issues that I have learned over the years.

You can apply some (if not all) of these principles to bespoke data integration tools, such as:

  • Web Services
  • Console Applications
  • Utilising a Service Bus (like Azure for integration messaging)

This will be my final* post discussing best practices for data integration, most of which can be applied when using any tool to complete the integration.

*Star Wars did not end with a trilogy.

Part 3: Performance Issues and Improving Performance

Performance Issues

Integrating with Dynamics 365 can be a painful learning curve; some of the issues I have encountered (that I would class as performance related) are:

  • Understanding of Entity Relationships (I mean, who loves integrating with the Activity and Pointer Entities??), a lack of which has led to poorly designed profile operations being completed by the integration.
  • Data import speed (measured in records per second, or records/second) slowing over time.
  • Network Latency/Connection errors which can arise at any time during the integration.
  • SQL Timeouts due to high load/queries on the D365 database
  • Error exceptions, such as plugins/workflows failing due to lack of resource, with Dynamics 365 providing us with as little detail as possible
  • Server Resources (source related)
  • Database Log File growth (staging database – source)
  • SQL Locks (source)
  • And the list goes on

Additionally, a big factor to consider with data integrations and loading data directly into Dynamics 365 is that this can only be achieved via two supported methods:

  • Directly through the Dynamics 365 interface: manually entering data through forms, bulk editing or uploading a file using the out of the box import tool.
  • Via the API using one of the many free, 3rd party or custom tools available which connect directly to one of the Dynamics 365 endpoints.

There is no supported method for loading data directly into the Dynamics 365 database, and if anyone asks you to do this, just say NO. So even if you are using a Staging DB, no SQL-to-SQL transfers are allowed.

With Dynamics 365 Online, Microsoft have protected customers from this kind of tomfoolery by restricting access (or even visibility) to the Dynamics 365 database!

The API

All Dynamics 365 data integrations must be made through the API, which means they:

  • Adhere to the Security Model of the User connecting to Dynamics, performing data load operations via their Security Roles.
  • Trigger any system events or custom (bespoke) automation when record operations occur (e.g. create, update, delete), such as:
    • createdon is populated
    • When records are created, CRM populates the createdby, owner and so on.
  • Back-end relationships are created in the database with other records in the system.
  • Caching of data/sharing of records is stored to the PoA table.
  • And all the other magical system operations Dynamics 365 completes (using the Force!), such as creating an audit trail.

Performance Implications

Considering the above items, we can see:

  • Firstly, why data has to be (and can only be) loaded through the API.
  • Why performance may be slowed as data flows through the Dynamics 365 application layer down to the database layer.

Whilst investigating why performance is being impacted, it also makes sense to look at ways of improving the performance of data integrations, with the aim of improving:

  • Data load times and records/second.
  • The quality and reliability of your integrations.

I will walk through a common data load scenario which highlights a few of the performance issues listed above that are usually encountered when completing a data migration to Dynamics 365.

I will then review the migration and make suggestions/list my best practices on how to overcome or help mitigate these issues.

Sales Load Example

For this example, I will walk through a very basic scenario of importing data without using any special tips or best practices; this also assumes that the data integration tool is importing 1 record at a time (discussed later on). This is a direct source to target data import and all values are made up for the purposes of this example.

Say you have the requirement to load a medium-sized data set (e.g. the last 5 years' worth of data) for a customer:

  • 15,000 Accounts
  • 100,000 Contacts
  • 6,000 Opportunities
  • 12,500 Opportunity Products

Dynamics 365 Online data migration from a data source.

We will use the example Sales load highlighted in my first blog; the load order would be as follows:

  1. Accounts
  2. Contacts
  3. Opportunity with grouped Opportunity Products

Account Load

The loading of the Account data into Dynamics 365 will be relatively painless; it is a simple profile that just performs a Create operation into the CRM database. The Account data set is relatively small and will probably complete within about 20-25 mins.

When measuring import speed, you will probably hear people talking about records/second (which a 3rd party tool will calculate for you). The average Account import speed here is about 10-15 records/second.

Contact Load

The next step is to load the Contact data into Dynamics 365. This profile is slightly more complicated in that we also have to search for an existing Account, firstly to set it as the Parent Account ID, and then to update that Account with the Primary Contact value.

So instead of the one load operation to Dynamics 365, we have 3 operations, with each operation requiring a round trip between the data integration tool and Dynamics 365:

  • Create (Contact)
  • Search and Update (Account and Contact)
  • Update (Account)

Because of the additional work the profile has to do (roughly 3 times as much), the import speed may go down to an average of 3-4 records/second. This will take approximately 7 hours to import 100,000 Contact records.

Opportunity Load

This process groups Opportunities and their child Opportunity Product records together, loading them in the same data profile, and contains 4 primary operations:

  • Search (Account)
  • Create (Opportunity)
  • Search (Product)
  • Create (Opportunity Product)

With the added complexity of grouping related data together (Opportunity and Opportunity Products) at the source, the profile may only achieve an average of 1-2 records/second; with 6,000 rows the load time will be around 50-100 minutes.

Example Review

Reviewing the Sales Process load example, it can be seen that increasing the complexity of a data profile can reduce the import speed.

For low-volume data loads, the time differences can be in minutes or hours which do not impact project timelines too much.

But consider the impact to projects where the data sets contain millions of rows (i.e. high-volume); the potential increase in project timelines can be calculated in either days or weeks, which your customers will not be happy with.

Identifying performance gains will be critical in these scenarios to reduce the time it takes to integrate the data.

Improving Performance

So, how can we go about improving data integration performance?

  1. Environment Optimisation – manipulating the environments around the data such as:
    • Storage location
    • Server Specs (On Premise only)
    • Network Performance
  2. Data Profile Optimisation – breaking down the steps with your data profiles.
  3. Matching Optimisation – increasing the efficiency when matching to records in Dynamics 365.
  4. Record Ownership – assigning record owners in Dynamics 365.
  5. Record Status – Saving your record statuses until last.

Environment Optimisation

Every data integration will involve different environments, and there are different things you can try to help improve performance without touching the data integration itself. The changes could be either hardware (e.g. increasing Server Memory) or software configuration (e.g. creating a database index) optimisations.

So let's look at the environmental changes we can make to increase performance when integrating data with Dynamics 365 – first we can split out the environments as follows:

  • Source – You may have one or more source systems if merging from multiple systems; if you have a staging database this will be your source system for your data profiles when loading data into Dynamics 365.
  • Integration Tool Resource – this may be under your control (i.e. an On-premise server)
  • Transit – Network connection between Source, Target and the Integration Tool
  • Target – Dynamics 365 (Online or On Premise).

Source

I will use the Staging Database approach to provide the context of the environment; if using a file, there is not too much you can do except clear down erroneous rows, or the 63,000 extra blank rows that Excel sometimes likes to add.

So with Staging Databases, I will make the assumption that this is a SQL database (but the same tips will apply to other database types like MySQL).

Firstly, I recommend you look at the standard best practices for optimising a database; I will list a few of the key ones that you can perform yourself.

  • Create useful indexes (a sketch follows this list).
  • Shrink the database – often (if using Full Recovery mode).
  • Keep on top of Database maintenance plans.
  • Make sure the following routines do not occur when your data integration is under heavy load, as they will negatively impact the read speed of queries being executed against tables, with the potential of generating SQL Timeouts:
    • Backup
    • Shrink
    • Index rebuild
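
As an illustration of the first point, here is a minimal index sketch (the staging table and column names are assumptions for this example, not your actual schema):

-- Hypothetical staging table; adjust the names to your own schema.
-- A non-clustered index on the columns your profiles filter on speeds up
-- reads while the integration is under load.
CREATE NONCLUSTERED INDEX IX_StagingContact_Email
ON dbo.StagingContact (Email);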

The following tasks may require a customer System Administrator to complete:

  • Limit your SQL instance memory to 1/2 or 3/4 of the memory on the server the instance is located on, leaving the rest available for the Operating System to use (see the sketch below).
  • Place data and log files on separate hard drives.
  • Increase the SQL Server allocated memory (to enable the above!).
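
A rough sketch of capping the instance memory with sp_configure (the 12 GB figure is purely illustrative – agree the value with your DBA):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the instance, leaving the remaining memory for the Operating System.
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;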

Integration Tool Resource

Some integration tools (like Scribe Insight) require their own Server with its own set of resources (and SQL database); I recommend using the following as a guide when choosing a location for your data integration tool:

  • Take the recommended System Requirements, and then double them.

For bespoke solutions (e.g. console apps or web services), make sure they have:

  • Decent connection to Dynamics 365.
  • Adequate memory and processing power to function under extremely high load.
  • A Service Account with appropriate security privileges and access to Dynamics 365.

If these are hosted in Azure and connecting to Dynamics 365 Online, how about moving them to the same data center as your Dynamics 365 instance? This will reduce network latency by shortening the route the data has to travel to Dynamics 365.

Transit

For all tools and bespoke solutions (console apps or web services) – make sure they have a decent connection to Dynamics 365 (as stated above).

  • Ensure that you are using a secure and encrypted HTTPS connection (applies to On Premise only).
  • Make sure all IP ranges for Dynamics 365 have been authorised through your Firewall.

Target

Loading data into Dynamics 365 can be affected by any number of items once the connection to D365 has been established. There are several things I recommend doing whilst loading data into Dynamics 365:

  • Disable any automation such as Workflows, Plugins or SLAs which may be triggered on import create/update.
  • Turn off Email Synchronisation (just in case any get created and sent out).
  • Postpone some of the out of the box internal maintenance schedules in Dynamics 365 (e.g. bulk deletion tasks).
  • For On-Premise deployments, standard Database practices should be completed regularly.
  • Add more resources to the CRM and SQL servers, and split out the Front and Back End server roles onto different Servers with load balancing.

Data Profile Optimisation

We have looked at my recommended best practices from an environmental point of view; next we can look at tweaking the way your data profiles operate and suggest improvements in logic.

Profile Load Order

The first item to think about is your profile load order: each data integration will require a profile order, where the data is ordered based on the data required to create each record type. The following diagram shows an example scenario for a typical data migration.

Example profile order for importing data into Dynamics 365

The decision could be made to either start with Leads or start with the Product Catalog (Products and Price Lists).

My best practices when planning a data integration and the profile ordering logic are:

  • Understand the essential “must-have” data for production go-live and build an import plan accordingly.
  • Identify independent data sets and/or configuration data and focus on these areas first, unless they are not required for go-live or are flagged as “nice to have”.
  • Determine your key data sets which are prerequisites for other data sets (e.g. Accounts are required before you can import Opportunities or Cases).
  • Import smaller data-sets first for quick wins; if a larger load encounters issues and needs to be restarted/fixed, then at least you have some data in and can concentrate on fixing the larger data-set.
  • Finish with Activities/Notes and Attachments; these are often the most complicated and difficult tasks to complete. Activities can be related to any of the preceding data (e.g. via the Regarding field), so I leave the hardest items until last. Also – historic Activities are most likely never going to change; these could even be imported after production go-live (not part of any critical path).

Select Columns

When querying source data, only select the exact columns you need to process as part of your data integration. Any additional columns queried but not used are a waste of resource and can impact the performance of data profiles.

So for an example source Contact SQL query:

SELECT [FirstName], [SurName], [Email], [PostCode], [Status]
FROM CUSTOMER
WHERE [Status] = 'Active'

Instead of the below query, which returns all columns:

SELECT * FROM CUSTOMER
WHERE [Status] = 'Active'

Pre-populate lookups

Populating Lookup fields often requires some element of validation to ensure that the record GUID exists, e.g. the:

  • Profile will first check to see if the record exists
  • Retrieves the record GUID
  • Sets the GUID in the create/update operation.

This means that every lookup value set as part of the data integration requires an extra query to and from Dynamics 365 before the Create/Update operation can occur. This slows down the time it takes to complete the transaction for each row (i.e. the records/second would decrease).

On the Case Entity, you may have several lookup GUIDs (e.g. Account, Contact and Product) to query before the Case can be created.

If we had these GUIDs before we started the import (as we should have imported the prerequisite data beforehand), then we could populate these fields without having to perform the additional query.

This would reduce the number of steps the profile has to complete to just a single Create Case step, and improve the import speed (records/second) of the data integration.

For smaller data-sets, this may not be required as the data does not take much time to load; but for the larger data-sets we can see that improvements to import speed can have a positive impact on the total time it takes to load the entire data set.

To record the GUIDs of the prerequisite data (especially configuration/Account/Contact data), I will always write the record GUID back to the source data row in my staging database within the original profile. Or, I will build a separate profile which queries Dynamics 365 (as the source) and targets the staging database; this will literally just write the GUID back to the Staging database source row.

So back to my Case import example: assuming I had captured the record GUIDs of each lookup, I could either:

  • Join to the Account, Contact and Product tables to include the GUID columns in my Case query.
  • Perform a simple SQL Update Script to populate these values (this will take seconds – a sketch follows).
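
A minimal sketch of that update script (the staging table and column names are assumptions for this example):

-- Copy the Account GUID captured earlier onto the staging Case rows, so the
-- Case profile can set the Customer lookup without querying Dynamics 365.
UPDATE c
SET c.AccountId = a.AccountId
FROM dbo.StagingCase AS c
INNER JOIN dbo.StagingAccount AS a
    ON a.AccountNumber = c.AccountNumber;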

Larger Volume Data-sets

Identify your largest data sets; some of these may be simple, but some may be complex. The aim is to improve the import speed (records/second), and the larger data-sets will gain the most from performance improvements.

The downside is that importing large volumes of data will eventually slow down table performance in Dynamics 365, which also has a negative effect when trying to match to records as part of data profiles.

Reduce the number of steps in the profile by completing additional ETL transformations in the Staging Database (if available) – like pre-populating lookup fields; aim for straight inserts or straight updates.

Bulk Operations

For simple, high-volume profiles, many tools (even bespoke Console Apps/Web Services) can take advantage of the bulk methods (ExecuteMultiple) provided by Dynamics 365, where you can submit data in blocks as part of a single transaction. These are designed to increase the throughput of your data to Dynamics 365.

Matching Optimisation

Nearly all data integrations will involve some sort of matching criteria, whether this is to return related record IDs to populate lookup fields on import, or when matching to either create new or update existing records.

Matching to records will often involve matching on key fields (in most cases a single field or key; in other cases the matching criteria may involve multiple field values).

Single key matching example – matching to Orders:

  • OrderNumber exact match

Multiple key matching example – matching to Contacts:

  • First Name AND
  • Last Name AND
  • Email Address AND
  • Postcode all exact matches

Indexes

Whilst matching to these values may be required and cannot be changed, we can create indexes on these columns quite easily. In Dynamics 365, you can add these columns to the Quick Find View – Find Columns for that Entity; Dynamics 365 will then create a database index for you automatically. This is extremely useful for Online environments, where you can affect the database without touching it directly.

Additionally – it may be useful to create indexes on the source columns used for matching over larger data-sets.
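
On the staging side this is a one-liner; a sketch for the multiple key Contact match above (table and column names assumed):

-- Composite index covering the columns used in the Contact matching criteria.
CREATE NONCLUSTERED INDEX IX_StagingContact_Match
ON dbo.StagingContact (FirstName, LastName, Email, PostCode);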

Alternate Keys

Where unique records are required to be matched, it may be worth considering creating an Alternate Key in Dynamics 365. Ben Hosking has written a very good article on alternate keys here. This will also create an index in the CRM database, which improves lookup times.

Record Ownership

Assigning records to a User or Team can take a while, even through the web interface, and this may be further impacted by cascading relationships to the record's children (if applicable), where each child is also assigned to the User or Team. This can have a negative impact on the data import speed.

Where records are required to have their ownership set during a data migration/integration, I consider it best practice on larger data sets, or records where cascading relationships exist (e.g. Accounts and Opportunities), to either:

  • Recommended – Split out the record assignment into a completely separate profile after the data has been integrated with Dynamics 365. This will increase the performance of the data integration initially creating or updating the record; the Assign action can then be completed post load.
  • Or have an additional relationship to the User Entity (i.e. an Assign Owner field), populate that during the migration, and then have a Workflow post migration/integration set the Owner to that of the Assign Owner field.

Tip: Ensure Users:

  • Have been created.
  • Have been assigned the correct Product License (if Online).
  • Have been assigned Security Roles which allow them to own the record type you are integrating with.

Record Status

Completing Activities or setting record Statuses can have the same impact as assigning record ownership (e.g. cascade behaviour). Also, depending on when the profile sets the status, further updates cannot (or, in most scenarios, should not) be made to those records through the data integration; with Activities especially, you will receive a hard Dynamics 365 error.

So my recommended approach is to consider setting the record Status in a separate profile, executed post load (assuming no further record updates are required); this will increase the performance of the data integration initially creating or updating the records.

Summary

This completes the final* Performance Best Practices blog for the data integration series; I hope you can start incorporating some of these principles into your data integration designs.

Thanks for reading,

MW

*I lied, there is another post to come with more best practices!

Part 5: Data Integration – Best Practices Strike Back

Recap Best Practices: Part 1

Welcome back! I am currently revisiting my data integration series, highlighting the data integration best practices that I have learned since my junior consultant days; these are the best practices I abide by when designing and building data migrations/integrations with Dynamics 365.

My previous blog discussed the following best practices:

  • Error reporting
  • Integration Accounts

Part 2: Staging Databases

Staging Databases

The problem

It took several large Data Migration projects to realise that loading large amounts of data from the source data set directly into the target system was not the most practical solution; the size and complexity of the data migration profiles made them very inefficient. For data migrations where merging of one or more source data sets was also required, it was clear this should not be completed in CRM: it often required numerous clear downs until the logic was correct, or post migration clean-up activities to remove duplicate data. I required a solution that sat in the middle of both the source and target, with the following properties:

  • Ability to merge the data sets.
  • Be a familiar platform which I know and which is compatible with the data integration tools I use.
  • Allow for data transformation to take place before load into CRM.

The solution

After discussions with colleagues and reading recommendations from the CRM community, it was clear that a staging area was required. It would never have occurred to me to add another component to a data migration; my thought process regarding data migrations/integrations had always been about moving data from source to target (with no 'via' option) only.

Location, Location, Location

When I first started out with CRM, most of my projects were On-Premise based; customers had numerous on premise servers (mostly VMs), but also dedicated SQL instances for the various applications they hosted (including CRM).

Additionally, the data integration tools I used were also on premise (i.e. Scribe Insight), so where better to host the new Staging Database than in SQL! CRM is built on top of it, it reports from it and so does Scribe; perfect – a Staging Database.

Example Staging Database location

When I talk about staging databases, I refer to a new database which is independent of any data system I am integrating with. This will be either hosted in an On-Premise Microsoft SQL Server instance or Online in Azure SQL.

Tip if using Azure SQL – locate the Staging Database in the same data-center/region as your Dynamics 365 instance (or as close as possible) to reduce the distance the data has to travel.

So, what are Staging Databases?

Staging Databases are used as temporary storage locations between two (or more) systems; they allow integration consultants to:

  • Merge multiple source (and target) data sets whilst de-duplicating the data.
  • Perform the standard ETL operations (Extraction of data, Transformation of data and Load of the data into the Staging database tables) so data is in the required format before loading into the target system.
  • Provide a complete view of all merged and collated data in a single location before it is loaded into the Target (such as row counts, data volumes and size).
  • Allow loading of transformed data into the target system without impacting on the Source Systems.

Why use them?

Staging Databases should be considered when there is either a large amount of data (high data volume), or multiple data sources need to be merged together and a fair amount of data merging or transformation is required. Completing these tasks as part of a direct source-target data integration can massively impact the performance of the integration (and the data systems) and the quality of the integrated data sets, and is prone to errors and failure.

Performing all the merging and data transformations in the staging database will increase the performance of the data load into Dynamics 365 (which we all know can take a long time); performance will be discussed later on in this series.

Data comes in all shapes, sizes, formats and localisations; throwing this all directly at Dynamics 365 will require significant conversion (e.g. Date fields, Currencies, related records, Option Set mapping), which will reduce the efficiency of the data load.

Do not underestimate the importance of good quality and timely data; integrations that are very slow and frequently error will never be well received by the customer.

Data integrations require time and investment; it is important to design a good, solid, quality integration. The same applies when designing and planning data migrations: these are underestimated in almost every project, yet they usually lead to the longest delays a project will experience.

A couple of years ago I attended a Scribe User Conference in Amsterdam and was talking with one of the European Scribe MVPs, Martin Buebl. He was discussing his latest data project (which went very well) and coined the phrase “You will never hear anyone talk about a good data migration, only bad ones”. One of the standout reasons his project went very well (barring his expert knowledge of data integrations and the tool) is that he used a staging database to merge together a few systems (and by a few I mean a lot)!

Not every scenario requires them, for example:

Simple integrations, where only one system with a small amount of data is transferred, may not warrant the time and investment spent building such an integration. The integration may also have a transactional/near real time requirement, where the addition of a staging component may slow down the integration.

Think about your integration, consider if the design could benefit from a staging database and discuss it with your clients.

Building the Staging Database

Before you can use the staging database, you first need to define the schema of the Tables and Columns required.

When I design an integration which requires a staging database, I try to re-create the schema of the Dynamics 365 Entities and Fields (and data types) that I will be mapping to inside the staging database. Whilst this may seem overkill and unnecessary to some, there is method to my madness: it actually makes it easier to build up mapping profiles, and I find myself rarely having to reach for the data mapping document (a future best practice). There is an added bonus if the third party tool has an “Auto Map” feature, where similar or identical column names are mapped automatically (Scribe does!).

I also learned how to create (and drop/recreate) database tables with SQL Scripts instead of creating them manually; a sketch follows below. Re-using scripts between projects is especially useful for the out of the box entities and fields – these will not change, and you just need to add the custom columns and data types that are bespoke to that customer.
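
A minimal sketch of such a script, mirroring a few of the out of the box Contact fields (the column sizes are illustrative, not the actual Dynamics 365 metadata):

-- Drop and recreate the staging Contact table between clear downs.
IF OBJECT_ID('dbo.StagingContact', 'U') IS NOT NULL
    DROP TABLE dbo.StagingContact;

CREATE TABLE dbo.StagingContact
(
    StagingContactId INT IDENTITY(1,1) PRIMARY KEY,  -- unique staging row ID
    FirstName        NVARCHAR(50) NULL,              -- mirrors contact.firstname
    LastName         NVARCHAR(50) NULL,              -- mirrors contact.lastname
    EMailAddress1    NVARCHAR(100) NULL,             -- mirrors contact.emailaddress1
    ContactId        UNIQUEIDENTIFIER NULL           -- D365 GUID written back post load
);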

Another benefit of recreating the Dynamics 365 schema in the staging database is that it makes the data mapping profiles simpler: you already know the target location, and you will be able to reduce the number of target profiles you have (maybe even to a single profile!).

Tips when using Staging Databases

  • Build the tables with a unique Staging Database ID which has an index (this helps with lookup performance), so you can relate imported data back to the source row; this will also help with error reporting and row identification if you pass this information to CRM.
  • Build once your CRM Solution Design (data model and schema) has been completed, signed off and maybe even built, so that you have a reference of what the target solution will look like and data mapping can be completed.
  • Build an Option Set table allowing you to map source option set values to target Dynamics 365 Picklist values (see the sketch after this list).
  • Think about building error/transaction tables to aid with error reporting and investigation.
  • Convert the database to Simple Recovery mode (to reduce database white space usage at the beginning).
  • Backup your database frequently, and immediately after you have finished building (Backup and Restore allows for faster clear downs than dropping all rows/tables).
  • Build and maintain (rebuild) indexes frequently across larger tables, on the key columns that will be searched (not really required for tables with low volumes of data).
  • Build useful Views which join together Tables from a system perspective, instead of using a Join in your data queries.
  • Avoid performing data integrations/ETL profiles while maintenance jobs are running on the staging database!
  • If using an On Premise database, make sure the data and log files (MDF and LDF) are on separate drives.
  • Allow more than 4GB RAM!
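
A sketch of the Option Set mapping table mentioned above (one assumption of how it could be shaped):

-- Maps source system values to target Dynamics 365 Picklist integer values.
CREATE TABLE dbo.OptionSetMap
(
    EntityName    NVARCHAR(100) NOT NULL,  -- e.g. 'contact'
    AttributeName NVARCHAR(100) NOT NULL,  -- e.g. 'preferredcontactmethodcode'
    SourceValue   NVARCHAR(100) NOT NULL,  -- value as it appears in the source data
    TargetValue   INT NOT NULL             -- Dynamics 365 option set value
);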

Part 3

This concludes my advice around Staging Database best practices and why you may wish to use one. The last article in this series will focus on:

  • Performance Best Practices
  • Additional Tips and Pointers

Thanks for reading,

MW.

Part 4: Data Integration – Best Practices

Revisiting the past

It has been a little over a year and a half since my last data integration blog. I had started this post but never finished it, and now have some time to review and actually finish this series (and to help me prepare for the CRMUG in Reading on March 1st!).

My previous data integration blogs can be found here:

Most of the content in the above posts is still relevant and valid, except that Dynamics CRM Online is now Dynamics 365 after the rebrand (yes – this was 2016).

Volume Reduction

I have decided to split out the content for this topic into several articles, as I realised whilst writing that there was a lot to cover; so please be patient!

Best Practices

Referring back to Part one of this series, I mentioned that Data Integration needed to be defined into the correct terminology, such as Migration, Integration or Replication.

Whilst the term helps define the data integration tools that we can use to meet that scenario, best practices can be applied to all types of data integration to ensure a consistent and uniform approach to all scenarios where integration with Dynamics 365 is required.

Data Integrations Best Practices will help:

  • Reduce time spent building up successful integrations.
  • Increase the performance of the integration (highly beneficial when moving large volumes of data).
  • Reduce time spent investigating issues and working towards a resolution.

Part 1: Error Reporting and Integration Accounts

Error Reporting

Types of Error

All types of data integration will at some point cause an error; there are so many working parts to an integration, and the errors that may arise include:

  • Source Data – the quality and format may be incorrect, even though the customer is adamant that they have fixed any errors.
  • String Lengths – too long for the target field.
  • Decimal/Whole Number data mismatches.
  • Assigned Owner does not have the correct permissions to own the new record.
  • Plugins triggering on update and expecting a value to be present.
  • Target Data/Metadata – a recent change to the system may have changed the metadata of a field that is a part of the integration, so the source data no longer maps correctly.
  • Source or Target availability – the systems have gone offline.
  • And the list goes on for a whole number of reasons.

The Learning Curve

When I first started learning how to use data integration tools, I often would focus on the core functionality that the tool offered. This would just be to perform the basic operations (such as Read, Create, Update) and to get the sample data to move from A to B.

Any errors that arose I would ignore, investigating at the end and putting them down to my lack of understanding/knowledge of the tool.

In some cases it would take many hours to actually figure out what was causing the issue in the first place, as there was no meaningful message presented at the outset.

As my exposure to different data migration/integration projects grew over time, I learned new ways (based on the tool I was using) to investigate and identify errors earlier on in the integration, picking up hints and tips from online forums and my colleagues. Ultimately, I learned how to handle and prevent some of these errors from ever occurring by building conditional logic and extra validation checks.

When you encounter an error, there are two actions that need to be completed:

  • Investigation – the integration is failing and you do not know why, so time will be required to investigate what is causing the failure and to identify the error.
  • Resolution – can you prevent/handle the error as part of your integration, or how else will you resolve it?

Error Reporting

If you are using 3rd party data integration tools, error reporting functionality should be provided as standard and, if used correctly, can be very useful.

Scribe Insight has several tables in its internal database (known as the execution/transaction log) where it stores captured details for every data transaction (row) that it handles; the errors provide some very descriptive detail when encountered (e.g. the length of the value XYZ is greater than the length of field 123). You can also assign messages/output text to Steps/Actions that Scribe takes throughout the DTS package, which you can additionally write back to your source data to help you read the errors in language more meaningful to you.

Additionally, Scribe has some out of the box SQL SSRS reports, viewable from the application, which can provide useful statistics on the import and help you group types of errors or data integration failures.

KingswaySoft

KingswaySoft is built on top of SSIS, so it can output failed rows along with the errors, whilst also parsing some very good error details and descriptions onto the failed rows (along with additional row information). These can be output to custom Excel spreadsheets and tables, or to a separate Error Reporting Database and Tables (my preference, as I have SQL scripts to recreate the tables and schema for me!); then it is just a case of mapping these rows to the respective columns. This will allow you to view the errors in a meaningful way and help you diagnose issues quicker.
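
For example, a minimal error table sketch (the shape is an assumption – KingswaySoft lets you map the failed rows and error columns to whatever schema you define):

-- One row per failed transaction, written from the SSIS error output.
CREATE TABLE dbo.IntegrationError
(
    ErrorId      INT IDENTITY(1,1) PRIMARY KEY,
    ProfileName  NVARCHAR(100) NULL,   -- which data profile/package failed
    SourceRowId  INT NULL,             -- staging row ID, for traceability
    ErrorCode    NVARCHAR(50) NULL,
    ErrorMessage NVARCHAR(MAX) NULL,   -- description parsed from the failure
    FailedOn     DATETIME NOT NULL DEFAULT (GETDATE())
);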

With both tools, you could even have error logging Entities inside of Dynamics 365.

Example Integration Reporting Entities in Dynamics 365

When either tool encounters an error, you can write the record of the transaction to Dynamics 365 for visibility to Users in the application (unless CRM being down is the issue!) and the Users can start the investigation.

Example writing to Error Reporting entities in Dynamics 365

Error Prevention

Whilst you will not be able to prevent all of the data related errors (the data is the customer's, and they should be resolving the data issues or instructing you what to do), you can prevent some of the errors you encounter.

Both packages allow you to determine what happens to each transaction if the tool encounters an error (data or system generated) – i.e. error output rows. Whilst the default action is to record these errors into your error capture area and move onto the next row, you could re-process the data with another set of steps the tool must take to try and correct itself.

For example, say you have an integration which imports Accounts, Contacts, Opportunities and Products, with each of these processes happening at a different time, such as in the Sales Process Load below. What happens if a required record (such as the Customer) does not exist when you are loading Opportunities? If the Opportunity is created without the Customer value present (a required field), this will cause an error.

Q. How could this scenario be handled as part of the integration?

Example Sales Opportunity Data load – each load runs on a different synchronisation time from the external system.

A. By conditionally checking that the customer record exists before you set it, instead of assuming that it will exist, you prevent an error from being generated. The row could then be re-directed for a second attempt in the next Opportunity data load, or wait a few minutes for the Account data load to catch up (Send to Retry).

You will not be able to cater for every scenario; additional checks come with added performance hits (which I discuss later on in a follow-up article on performance) and are not always necessary.

Be proactive and think about these scenarios; build the data load logic to handle and prevent these types of errors from occurring.

Summary

Learn how to use the error functionality of these tools as part of learning the product; it will save you valuable time further down the road and allow you to identify root causes quicker.

  • Report the Error in a meaningful format
  • Handling of errors or scenarios to prevent errors

Some of the free tools, such as the OOB import wizard, unfortunately do not always present us with a nice detailed error or have the ability to retry; they may even just give us a code! So when choosing a data integration tool, think about the requirement for error handling and whether investing in a 3rd party tool would outweigh the time spent investigating potential errors with the free tools (a consultant spending a day or two on an error may cost the same as a 3rd party tool!).

Integration Accounts

Service Accounts

The use of service accounts with Dynamics 365 is nothing new these days, and it is common practice with On Premise Dynamics 365 (CRM) installations to use them to run CRM Services or external web services that connect to Dynamics 365 programmatically.

When discussing service accounts in a data integration context, always consider the size and complexity of the integration/migration and whether they would offer real value or not.

For small scale one-off data migrations, or when using the OOB Import Wizard/Configuration Migration Utility tool, service accounts are generally not required; in the case of the OOB Import Wizard they cannot be used at all, as they cannot access features through the user interface.

For larger data migrations or data integrations (where data will be integrated with one or more external systems over a period of time), a service account should be considered as best practice.

Service Accounts provide:

  • Traceability of the integration: it will be easier to identify data that has been modified by a Service Account, especially with Auditing and the CreatedOn/ModifiedOn fields in Dynamics 365.
  • Full control of which applications use the service account; this can reduce the number of concurrent connections to the Dynamics 365 server at any one time, so as not to interfere with connection timeouts.
  • No front end access; this will stop anyone with the credentials from logging into the application and using the product with the Web Client.

Non-Interactive

Service Accounts in Dynamics 365 Online are known as Non-Interactive users; you are allowed to set up 5 Non-Interactive user accounts per Office 365 Tenant. I recommend creating these in your first instance, so that if you copy the instance you will not have to re-configure the User as a Non-Interactive User.

As they are free and very easy to set up, there is no reason why you should not use one!

Azure AD account (Online Only)

If considering a data integration/migration with Dynamics 365 Online, ask for an account on the default .onmicrosoft.com domain (e.g. matthew.webb@companya.onmicrosoft.com). If a company has their on premise AD domain synced with Azure AD and something corrupts/interrupts the sync, those Users cannot log into Office 365; the default .onmicrosoft.com account would remain unaffected and the data integration would not experience the same issue (unless the source system was affected by the AD issue!!).

Security Role

Think about the Dynamics Security Role the integration account actually requires to complete its assigned task. At the beginning of a data integration, the System Administrator role will get you over the initial permission hurdles (especially while the security requirements are still being drawn out) and may help reduce the initial build time for the integration. For UAT and Production, the integration account should only have the permissions it actually needs to complete its task.

  • This stops the user/integration from being used to update/delete/create other data sets that it was not intended for; essentially, it locks down a potential weak spot in the security architecture.
  • The account would only see the metadata it requires for the data integration (which would help with metadata refreshes at build level also).

Name

I would generally name an integration account along the lines of “Data-source Integration User”, or something that easily identifies the account as being used to integrate that data-source.

Summary

So to summarise the Integration Account:

  • Think about using a Service Account (or Non Interactive Account) for data integrations.
  • Use a default .onmicrosoft.com Azure AD account if allowed to (Online only).
  • Security Role: Give only the permissions needed to perform the data integration.
  • Choose a good name!

Part 2

I have many more tips, tricks and best practices still to give, such as (and this is not the full list either):

  • Performance improvements (increase that Rows per Minute counter) and other considerations
  • Matching data.
  • Indexing.
  • Maintenance Schedules.

Thanks for reading,

MW.

Part 3: Which Data Integration Tool should I use?

Recap

In my previous blog I introduced the following topics:

  • Supported and unsupported data import methods
  • Identified two categories for data integration tools:
    • Free
    • Cost-associated tools
  • Examples of the data integration tools available to you

Data Integration Tools

I have previously stated that not every data integration tool can be used for every data integration type. Why is this, you ask? To differentiate between the tools, I will give a short summary of each tool I have listed previously and highlight which Integration Types it can be used for.

Free Tools

Data Import Wizard

The Data Import Wizard can only be used for standard Data Migrations from a single comma-separated values (.csv), XML Spreadsheet 2003 (.xmlss), or text (.txt) source file into CRM. So for any data migration where a legacy system is involved – if you cannot export the data from the source system into one of the above file formats, then this tool will not meet your requirements.

This tool can:

  • Create Data maps which can be reused
  • Map to parent records using one field only (unique value needed)
  • Utilise the out of the box duplicate detection rules

The tool has the following additional limitations:

  • Cannot import at a scheduled time
  • Manually executed import
  • Has a row/file size limit in one import operation

Configuration Migration Utility

The Configuration Migration Utility is generally used to move records from a single source CRM organisation to a target CRM organisation; it is mainly used for moving definition/configuration records where the GUIDs are required to be intact/identical across the source and target organisations. In practice, it is not used as a full-on data migration tool (it could be, but this is very tricky and it is not designed/advised to be used as such); rather, it is used in solution deployments where these records are used by the functionality/solution(s) being deployed.

This tool can:

  • Import Dynamics CRM record data with relationships (assuming parental records are identical across both systems)
  • Export Dynamics CRM record data with relationships (assuming parental records are identical across both systems) to keep as a backup or to use when restoring an Organisation

The tool has the following additional limitations:

  • Cannot import at a scheduled time
  • No filtering criteria to exclude records – it takes all records for that particular entity and either all or selected attributes
  • Manually executed import
  • Microsoft Dynamics CRM to Microsoft Dynamics CRM only

XrmToolbox

Manage N:N Relationships

The default out of the box data import wizard can only import records directly and populate N:1 relationships; it cannot handle N:N relationships. The ‘Manage N:N Relationships’ plugin can be installed into the XrmToolbox; this tool can only be used to create associations between two existing records and requires 2 unique values to match to the records (these do not have to be the IDs). Only .txt or .csv source files may be used, and the records must exist in two columns (the tool allows you to choose a separator). Normally this plugin is used in conjunction with the data import wizard when uploading data from source spreadsheets – and can only be used for Data Migrations.

This tool can:

  • Import Dynamics CRM record associations over N:N relationships (assuming unique values are provided for both associating records)
  • Perform deletions (disassociations) in bulk – useful for fixing data on an ad hoc basis
  • Tell you which data rows fail

The tool has the following additional limitations:

  • Cannot import at a scheduled time
  • Can only perform associations
  • Manually executed import
  • Requires you to manually create the source file

MSCRM Toolkit

I actually made an error and got slightly confused when referencing this tool – I incorrectly referenced the Data Transporter, which is a part of the XrmToolbox – when in actual fact the tool I meant (which is a part of the MSCRM Toolkit) is called:

Reference Data Transporter

The Configuration Migration Utility allows users to migrate configuration records between CRM organisations; the Reference Data Transporter goes one step further and allows you to filter the configuration records from CRM and transfer them to your target organisation.

It also allows the records to be synchronised at a set time (i.e. to keep the records in sync); this tool can be used in both Data Migration and Data Integration scenarios. It could additionally be used for basic Data Replication across two Dynamics CRM systems, due to its scheduling and filtering (i.e. by last modified on) to update recently changed data.

This tool can:

  • Import Dynamics CRM record data with relationships (assuming parental records are identical across both systems)
  • Export Dynamics CRM record data with relationships (assuming parental records are identical across both systems) to keep as a backup or to use when restoring an Organisation
  • Synchronise the Dynamics CRM record data at set times across Dynamics CRM Organisations

The tool has the following additional limitations:

  • Microsoft Dynamics CRM to Microsoft Dynamics CRM only

Connector for Microsoft Dynamics

Microsoft provided a free (with valid software assurance) integration tool that allowed a custom integration between Dynamics CRM and one of the Dynamics ERP systems (such as AX/GP/NAV/SL). The Connector provides an effective integration of the Sales Order process, originating in Dynamics CRM all the way through to Sales Invoicing in the ERP system.

The Connector for Microsoft Dynamics can be used in both Data Migration and Data Integration scenarios due to its two-way Connectors. Unfortunately, Dynamics CRM 2016 will be the last major Dynamics CRM version that Microsoft will support the ‘Connector for Microsoft Dynamics’ for (announcement here) – this is most likely due to the recent announcement of Dynamics 365 and the interoperability framework of Dynamics NAV 2016+ (i.e. integration with Dynamics CRM will be out of the box).

There was also a Dynamics CRM to Dynamics CRM Connector that allows for integrations between CRM organisations, i.e. to keep data in sync. This tool could be used for basic Data Replication across two Dynamics CRM systems, due to its scheduling and filtering (i.e. by last modified on) to update recently changed data.

This tool can:

  • Transfer data to both source and target systems in a unidirectional or bidirectional data integration
  • Can be scheduled to run at set times or on an ad hoc basis
  • Basic Error Level reporting and warning notifications
  • Easy to use interface

The tool has the following additional limitations:

  • Can only transfer records flagged for integration (requires record flagging initially)
  • Can only be used to transfer data between Dynamics CRM and the Dynamics ERP suite (out of the box).

Cost Associated Tools

Third Party Tools

Now, I could go on and on about the next two tools – but I will keep it brief; the functionality will be roughly similar across each product, and the same could also be said for the other third-party products I have already mentioned (except for the location of the software).

Scribe Insight

Scribe Insight is the most recognised (and most used from what I can tell) data integration platform out there in the Dynamics community (including AX, NAV, GP etc.); so much so that Microsoft used it to perform their internal migration from CRM 2011 to CRM Online.

This tool can perform Data Migrations and Integrations to integrate/migrate multiple source and multiple target systems at the same time.

Scribe has the following (example) list of data connectors:

  • txt/csv/excel spreadsheets
  • ODBC Connections
  • SQL
  • MySQL
  • SalesForce
  • XML

Full list of connections for Scribe Insight here

Scribe Insight can perform Data Replication – but there are not really any default templates for this out of the box with Insight; it would not be hard to create a Data Replication process for the data you wish to replicate.

This tool can:

  • Transfer data to both source and target systems and perform omni-directional data migrations/integrations/replications
  • Can be scheduled to run at set times or on an ad hoc basis
  • Can be triggered by Events (i.e. manual or MSMQ driven, file upload etc.)
  • Advanced error level reporting and warning notification administration, and advanced error handling with Microsoft Message Queues
  • Easy to use interface
  • Advanced Training material and courses/blogs
  • Full Support with valid Software maintenance
  • Great community to assist with issues

The tool has the following additional limitations:

  • On premise installation which (ideally) requires a SQL Database
  • Time to train puts potential users off; people view Scribe as needing to have a “developer” mind (which is not true)
  • Cost to purchase the licenses (you can buy 1 off migration licenses or full software licenses)
  • Connectivity is not as advanced as Scribe Online

Scribe Online

The younger sibling of Scribe Insight, Scribe Online was originally used to perform basic data integrations, but its functionality and features have grown so much over the last few years that it really is a viable alternative data integration platform to Scribe Insight.

Firstly – it is a cloud based (subscription licensed) iPaaS which is accessed via a web browser. It is currently built on a Silverlight UI but is (without breaking any NDA I might have signed with Scribe over the last few months) being rebuilt to use an HTML 5 interface (known as Scribe v2 – which can only be a good thing).

Scribe Online has a multitude of connectors available to it – the full list can be seen here – which gives (in my opinion) Scribe Online the edge over Scribe Insight. It also has a CDK which allows ISVs/Partners to build Connectors for their own products or services, which immediately makes Scribe Online more attractive – especially to cloud services like Marketo or SalesForce (and of course Dynamics CRM Online).

Scribe Online advertises itself as a data integration platform based in the cloud – but it also offers a service called “Replication Services” which allows you to configure source entities for replication to a SQL DB (either on premise or Azure hosted); this is a very powerful and popular product, and many customers use it for local SQL reporting and SQL backups. Scribe also offer a PowerBI Solution which is built on top of RS.

This tool can:

  • Transfer data to both source and target systems and perform omni-directional data migrations/integrations/replications
  • Can be scheduled to run at set times or on an ad hoc basis
  • Advanced Error Level reporting and warning notification administration
  • Easy to use graphical interface which is very easy to pick up
  • Advanced FREE Training material and courses/blogs
  • Full Support with a valid Subscription
  • Great community to assist with issues
  • On Premise and cloud or cloud only platform
  • Advanced Connectivity with the possibility of building your own Connector
  • API access

The tool has the following additional limitations:

  • Uncertainty in the product – due to its short life, it is often overlooked by Insight users
  • Cost to purchase the subscription, which restricts the number of connections you can run
  • Silverlight only (i.e. IE is needed!)

Web Services and Console Apps

Web services (from an integration perspective) are custom applications designed to allow two or more systems to talk to each other's APIs and allow the transfer of data between them. They are bespoke applications that need to be written by developers who have knowledge (either working or documented) of both APIs, and are specifically designed to do one task.

Console Applications are small (and streamlined) programs written to carry out a specific task. In a data integration environment they are usually used for simple or complex data processing tasks, typically scheduled via Windows Task Scheduler to run out of hours, or every hour or so, to update data systems (i.e. performing complex calculations and updating the source data set).

Both these tools are bespoke – so they will be designed and built to meet specific requirements.

Web services are more aimed at Data Integrations where data needs to transfer between one or more target systems to keep them synchronised; they could also be used for Data Migrations between heavily bespoke systems where third-party tools fall short on connectivity.

Console Applications are designed to perform data processing in addition to running data integrations.

Summary

I have given some pointers on the data integration tools available and the data integration types they can be used with, along with some positives and limitations. The tools are just a sample of all those available (i.e. there are additional third-party products and other free tools); I just cannot list them all, or give an overview of tools I have not personally had some involvement with. I hope these blogs help you choose the right tool for the right type of data integration.

Next Blog

In my next blog, I will highlight the main pointers and best practices that I use when undertaking data integration projects.

Thanks for reading! MW

Data integration, Uncategorized

Part 2: Data Integration Tools

Recap

In my previous blog – I briefly talked about data and the types of data integration. To recap – the three main types of data integration are:

  • Data Migration
  • Data Integration
  • Data Replication

Data import methods

Data can be imported into CRM in many different ways, either in a supported or an unsupported manner.

Unsupported data import

The main unsupported way some people get data into CRM is by direct database manipulation – Microsoft warns against direct inserts/updates/deletes against the database using the phrase “Unsupported operation” (https://msdn.microsoft.com/en-us/library/gg328350.aspx#Unsupported ), i.e. they will not support your CRM system if you decide to play with the database.

Data imported into CRM via this method would not adhere to any security considerations (such as restricted data access for users) and would not trigger any business logic (such as plugins, workflows etc.); basically, the data skips the entire application layer of CRM.

In CRM Online, the SQL database is not even accessible, so you are restricted to using the CRM web services; this is also the recommended approach (and the only approach I will ever use for data import*) for CRM on-premise data integrations.

*the only exception may be when I just need to read data out of CRM as a source data system where no update is needed
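
As a rough illustration of the supported route, here is a minimal Python sketch of creating a record through the Dynamics Web API. The organisation URL and OAuth token are placeholders (token acquisition is out of scope here), and the API version may differ for your organisation:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # hypothetical organisation URL
ACCESS_TOKEN = "<oauth-bearer-token>"          # acquired via Azure AD beforehand

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json; charset=utf-8",
}

# Create an account through the Web API - the supported route. Because the
# request goes through the application layer, plugins/workflows fire and
# fields such as createdon/createdby are populated for us.
payload = {"name": "Contoso Ltd", "telephone1": "01234 567890"}
response = requests.post(f"{ORG_URL}/api/data/v8.2/accounts",
                         headers=headers, json=payload)
response.raise_for_status()  # a create returns 204 No Content on success
print("Created:", response.headers.get("OData-EntityId"))
```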

Supported data import

So when choosing a data integration tool, you will need to choose a tool that can import data into CRM in a supported manner (i.e. either through the UI, or through the CRM API using custom applications/web services or third-party data integration tools).

Data Integration Types vs Tools

Early on in CRM projects it can be difficult to choose the correct Data Integration Tool, as the data integration requirements are not always known at this stage; through the Diagnostic/Analysis stages of a project the initial requirements will surface and the type of data integration will become known (i.e. Migration, Integration and/or Replication).

There are many data integration tools out there – but not all data integration tools can be used across the different types of data integration. For example, if a data integration tool is built to load data into CRM but not extract it from CRM then the tool is only really useful for data migrations/unidirectional (one-way data transfer) data integrations into CRM.

Data Integration Tools – what are available to me?

Firstly – there are many Data Integration Tools which I currently use to meet the data integration requirements of my customers; primarily these can be split into two high level categories:

  • ‘Free’ Tools
  • Cost-associated Tools (such as bespoke development or third-party data integration Products)

‘Free’ Data Integration Tools

The term ‘Free’ should be used loosely – mainly because they are free for everyone to use, but the customer is ultimately paying for your time (I hope!) to complete the data integration requirement.

There are many free data integration tools out there which can be used to aid you in meeting your data integration requirements; here are a few I have used:

[Image: Config]

Microsoft has released the following tool, which is included as part of an Office 365 subscription (and only applies to CRM Online organisations); it is currently only in preview in certain geographic locations (so I have not used it yet).

Additionally – Microsoft has released some very interesting tools that can be used as effective data integration tools (subject to cost and licensing) over the coming months:

[Image: Flow.PNG]

I will endeavour to explore these tools over the next few months to gain a better understanding of them; undoubtedly some of you will be familiar with these already!

Cost-associated Tools

Cost-associated Tools are data integration tools where there is an initial upfront cost that must be covered to use the tool – such as purchasing a third-party data integration tool, or designing and building a custom data integration application (such as a console app, web service, SSIS package etc.), for which your company will need to allocate the required resource.

Third Party Data Integration Tools

As my job function involves data integration, I have become certified in both Scribe Insight and Scribe Online, which are developed by Scribe Software (I will do a separate blog on these products at a later date, describing each product and how they compare against each other). They are my go-to data integration tools of choice – my biggest challenge of late is deciding which tool to use! (again, more on this another day).

  • Scribe Insight is a powerful data integration platform that is provided as an on-premise data integration tool
  • Scribe Online is one of the market-leading cloud-based data integration platforms (or iPaaS – Integration Platform as a Service), which can provide complete cloud-based data integration between cloud data systems (such as CRM Online and Salesforce)

There are other tools out there; I just have not really used them, so I cannot comment on their strengths and weaknesses, or where they can be used to meet certain requirements.

Custom Data Integration Applications

There have been times where a purchased data integration tool may not meet the requirements of the type of data integration (such as a web service being called to retrieve data from CRM and transfer it to a target data source), as the data integration tool may be too slow for this type of request. This is where CRM developers are required to build custom applications to act as the data integration tool (a minimal sketch of the retrieval pattern follows the list below).

Examples of custom applications are:

  • Web Services – these sit in between two data systems and communicate through each other’s APIs
  • Console Applications – applications which sit on a server
  • Plugins and Custom Workflow Activities calling CRM Actions
  • SSIS Jobs/packages
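
As promised above, here is a minimal Python sketch of that retrieval pattern – reading records out of CRM via the Web API and handing them to a target system. The organisation URL, token and target endpoint are hypothetical placeholders:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # hypothetical
HEADERS = {"Authorization": "Bearer <oauth-bearer-token>",
           "OData-Version": "4.0",
           "Accept": "application/json"}

# Retrieve active accounts from CRM via an OData query...
query = ("/api/data/v8.2/accounts"
         "?$select=name,accountnumber&$filter=statecode eq 0")
rows = requests.get(ORG_URL + query, headers=HEADERS).json()["value"]

# ...and hand them to the target system (a made-up endpoint here).
for row in rows:
    requests.post("https://target.example.com/api/accounts", json={
        "name": row["name"],
        "reference": row.get("accountnumber"),
    })
```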

To summarise

In this blog I have identified some of (the many) data integration tools available for you to use (but not told you how to use them – that I will leave up to you); in my next blog – I will highlight the types of data integration and which tools (listed above) can be used to help meet the data integration requirements of your projects.

Thanks for reading! MW

Data integration, Uncategorized

Defining Data Integration Types

Overview

Over the next few weeks I will write a series of blogs relating to data (based on my personal experience) – this will involve the following topics (from the outset – I may add more!):

  • Defining Data Integration Types
  • Identifying the correct Data Integration tool for the project
  • How to plan and manage Data Integrations as part of a CRM project (including my best practices)

Data

How many of your CRM projects have the vague requirement of “Data”? I am willing to bet that 99% of your projects will have at least a basic “Data Integration” requirement, such as importing Contact or Account data at some point in your project (there is always that one customer who wants a vanilla system and will start afresh).

Every CRM project I have worked on in the last three years has involved an element of “data integration” – my role varies (on a project-by-project basis, and sometimes I perform all three roles in a single project) between:

  • Solution Architect/Lead Consultant
  • Functional consultant
  • Data Integration Consultant

Whichever role I take on in a project, my mind always considers the data requirements: the CRM solution being designed not only needs to match the business requirements and processes of the end users, but the underlying data model should be designed so that any historic or integrated data can be mapped correctly while maintaining key existing information (such as relationships to other data). This means you need to involve data analysis early on in your project to make sure that the data can be imported in a suitable format.

Data Analysis

Analysis of source data (or data systems) early on in the project will help you identify any potential tables and related attributes (fields and data types) that you may need to custom build; it can also aid in mapping this data so you know what to expect further down the line. On the flip side, it allows you to see which areas of CRM have no equivalent in the existing systems (such as Emails or Appointments etc.), so you can eliminate these areas early on.

The main output from data analysis will determine the type of Data Integration required.

Types of Integration?

The term Data Integration is used loosely when a project is incepted; when we talk about Data Integration in relation to a CRM project, we can split the term into three main types:

  • Data Migration
  • Data Integration
  • Data Replication

Defining the type of Data Integration is key to determining the Data Integration tool we can use.

Data Migration

To put it simply – Data Migration is the process of transferring data from one or more source data systems or storage locations (such as a SQL database, spreadsheets or other file types, formats or IT systems) into a single target data system. Once the data has been transferred from the source system to the target system, the source system is usually archived off and will no longer be used after go-live of a CRM project.

Some examples of a data migration:

  • Goldmine to Microsoft Dynamics CRM
  • Microsoft Dynamics CRM 2011 to Microsoft Dynamics CRM Online
  • Excel Spreadsheets to Microsoft Dynamics CRM
  • And So on…

[Diagram: data migration]

Data Integration

Data Integration is a little more complicated and covers a broader method of data transfer – in its simplest form it might be one of the same scenarios mentioned in the data migration section, but with a continuous one-way data flow from the source system to a target system (where the source system is still in use). This is known as a Unidirectional Data Integration (one-way data flow); a minimal sketch of this pattern follows the examples below.

Some examples of a Unidirectional Data Integration (One-way data flow):

  • Back office ERP System to Microsoft Dynamics CRM
  • Website Lead Generation to Microsoft Dynamics CRM Online
  • Microsoft Dynamics CRM Online to Microsoft Dynamics CRM Online (where development/UAT organisations maintain up to date data for testing and training purposes)

[Diagram: unidirectional (one-way) data flow]
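
As noted above, here is a minimal Python sketch of a repeated one-way sync driven by a modified-on watermark; the organisation URL, token and target endpoint are hypothetical placeholders:

```python
import requests
from datetime import datetime, timezone

ORG_URL = "https://yourorg.crm.dynamics.com"   # hypothetical
HEADERS = {"Authorization": "Bearer <oauth-bearer-token>",
           "OData-Version": "4.0",
           "Accept": "application/json"}

def sync_changed_contacts(last_run: datetime) -> datetime:
    """One pass of a unidirectional sync: pull contacts changed since the
    last run and push them to the target system (a made-up endpoint)."""
    watermark = last_run.strftime("%Y-%m-%dT%H:%M:%SZ")
    query = ("/api/data/v8.2/contacts"
             "?$select=contactid,fullname,emailaddress1"
             f"&$filter=modifiedon gt {watermark}")
    for row in requests.get(ORG_URL + query, headers=HEADERS).json()["value"]:
        requests.put(
            f"https://target.example.com/api/contacts/{row['contactid']}",
            json={"name": row["fullname"], "email": row["emailaddress1"]})
    return datetime.now(timezone.utc)  # becomes the next run's watermark
```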

A Bi-directional (two-way data flow) Data Integration can be defined as a data transfer between two data systems where data from each system is required to be transferred to the other.

Some examples of a Bi-Directional Data Integration:

  • Back Office ERP to Microsoft Dynamics CRM (and Vice versa)
  • Microsoft Dynamics CRM Online to Microsoft Dynamics CRM Online (where configuration data is kept up to date in all organisations – Dev or UAT and Live)

[Diagram: bi-directional (two-way) data flow]

Omni-Directional Data Integrations

The same as a Bi-directional data integration, but where data is transferred across more than two data systems.

Some examples of an Omni-Directional Data Integration:

  • Multiple Back Office ERP systems to Microsoft Dynamics CRM to SQL Staging Database to Front End Website Database
  • Microsoft Dynamics CRM Online to Microsoft Dynamics CRM Online (where configuration data is kept up to date across all organisations – Dev, UAT and Live)

[Diagram: omni-directional data flow]

Data Replication

What is Data Replication? Well, in the CRM Online world a customer does not have direct access to the CRM database in the way an On Premise customer does (because – you know – CRM sits on top of a big shiny SQL DB!); this makes writing SQL-based reports impossible (you are restricted to FetchXML-based SSRS reports), nor can they effectively control the backup of their live database. So the term Data Replication is the process of making a copy of a target data system into a replica database or data system, such as CRM Online being replicated to a SQL DB for reporting or local backup purposes (a minimal sketch follows the diagram below).

[Diagram: data replication]
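
Here is a minimal Python sketch of that replication idea, with SQLite standing in for the local reporting database; the organisation URL and token are placeholders as before:

```python
import sqlite3
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # hypothetical
HEADERS = {"Authorization": "Bearer <oauth-bearer-token>",
           "OData-Version": "4.0",
           "Accept": "application/json"}

# Local replica for SQL reporting/backup (SQLite standing in for SQL Server).
db = sqlite3.connect("crm_replica.db")
db.execute("""CREATE TABLE IF NOT EXISTS account (
                  accountid TEXT PRIMARY KEY,
                  name TEXT,
                  modifiedon TEXT)""")

url = ORG_URL + "/api/data/v8.2/accounts?$select=name,modifiedon"
while url:  # follow @odata.nextLink paging until the entity is fully copied
    page = requests.get(url, headers=HEADERS).json()
    for row in page["value"]:
        db.execute("INSERT OR REPLACE INTO account VALUES (?, ?, ?)",
                   (row["accountid"], row["name"], row["modifiedon"]))
    url = page.get("@odata.nextLink")
db.commit()
```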

Data Integration Tools

In my next blog – I will discuss the various data integration tools that I have used with my projects and I will mention other tools that I know of; but before we can talk about deciding which data integration tool to use – the type of data integration must be clearly defined as not all data integration tools can be used for each type of data integration.

Thanks for reading! MW