
What am I doing nowadays, over the last few months..? January 30, 2014

Posted by msrviking in General, MySQL, Performance tuning.

I had been busy with SSIS work for a few months, and since my last post here I have been working on MySQL. Why am I working on MySQL and writing about it on a SQL Server blog? Well, I am trying to get a hold on how MySQL works while I try to stabilize the performance of a system. I am certainly not a MySQL geek who can dig deep into the OS, hardware, and MySQL configuration, but with the little knowledge I have (and am gaining as time goes by) I am troubleshooting query performance, indexing, and partitioning tables. These are the few things I have been trying to put in place even before I get to other levels of performance engineering.
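
To give a flavor of it, this is the kind of thing I have been doing; a minimal MySQL sketch only, where the table and column names (orders, order_date, customer_id, amount) are made up for illustration, and the partitioning bit assumes the partition key is part of every unique key on the table, as MySQL requires.

```sql
-- MySQL sketch with hypothetical table/column names.
-- No graphical plan here, so EXPLAIN is the starting point for a slow query.
EXPLAIN
SELECT customer_id, SUM(amount)
FROM   orders
WHERE  order_date >= '2013-12-01'
GROUP  BY customer_id;

-- Add an index to support the filter and grouping (MySQL has no INCLUDE,
-- so the extra column simply becomes part of the key).
ALTER TABLE orders
  ADD INDEX ix_orders_date_cust (order_date, customer_id, amount);

-- Range-partition the table by month, assuming order_date is part of
-- every unique key on the table (MySQL insists on this for partitioning).
ALTER TABLE orders
PARTITION BY RANGE (TO_DAYS(order_date)) (
  PARTITION p201312 VALUES LESS THAN (TO_DAYS('2014-01-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```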

A thread of thought is already running on whether I could shard the database and introduce read-write splitting to provide a scale-out solution, either by using out-of-the-box features like MySQL Cluster, or by customizing: partitioning tables, sharding them across different nodes, and doing read-write splitting using MemCache.

That is a lot of thinking in MySQL terms, and I don't have the same flexibility in SQL Server, although there are implementations that do read-write splitting using load balancers at the application level rather than in the database. I am highlighting a few things that are not there in SQL Server but can be used to the fullest in MySQL. Then again, there are many things missing in MySQL that are so good to use and work with in SQL Server.

Some of the ones that I am missing badly in MySQL are

  • Profiler
  • Graphical execution plan
  • Indexes with included columns (a quick T-SQL sketch follows this list)
  • Free and easy to use monitoring tools for OS, Hardware
  • Multiple query plan algorithms
  • Proper documentation (implementation or bugs)
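
To show what I mean by the included-columns point, here is a quick T-SQL sketch, with made-up table and column names, of the covering-index pattern I keep reaching for in SQL Server and cannot write the same way in MySQL.

```sql
-- T-SQL (hypothetical objects): a narrow key plus INCLUDEd columns lets the
-- index cover the query without widening the key itself.
CREATE NONCLUSTERED INDEX ix_orders_customer
ON dbo.orders (customer_id, order_date)
INCLUDE (amount, status);
```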

This post is to share the top few things I am seeing in MySQL and the few things I miss when I think of SQL Server. It's kind of a home-sick post, I would say.

I will keep writing as and when I learn new things, and will definitely put up comparisons with SQL Server features.

Happy reading.

RIP MCM……my mourning September 4, 2013

Posted by msrviking in Career, DBA Rant.

I didn't want to write about this sudden happening, but I couldn't hold in my anguish after reading so many posts and articles, and I thought I should share my quota of wailing. Two things broke my resolve – an InsiderMail from Paul Randal (@PaulRandal | Blog) on 9/3, and the post from Satya (@SQLMaster | Blog) on 9/3. Both industry experts shared their feelings on the sudden news, coincidentally on the same day.

MCM is discontinued, and that is what gave me a shocker, a stunner. This post talks about how Microsoft has decided to take it off its list. I spent a few hours reading different articles, posts, and news feeds with the keyword MCM, and realized I am hearing, and actually bearing, the brunt of the fact that Microsoft has decided to retire the MCM program. If I have to be honest with myself, I was trying hard to get closer to my dream and goal of obtaining an MCM. Now it looks like I wasn't getting anywhere near it, apparently, with something holding me back – the retirement of MCM on 1-Oct-2013. Not good news at all for folks out there who have probably dreamt like me and put in extra hours to get one, or are still trying for that ultimate achievement. Microsoft has denied that feeling to all those fans of SQL Server who have worked years on the product, by snubbing them with a pull-the-plug action on the program. The death of MCM is going to be quicker than the retirement of the famous Google Reader. It took Google a few months to bring down that service; Microsoft is going for the death-punch in less than two months, huh. A known play as usual, and one without any breather from Microsoft. Nice job, uh.

I could go on ranting about my displeasure and disappointment at the news, but what's the use of crying over what has happened? Work will get back to usual after a few days everywhere, and everyone will revisit their priorities. However, someone has to keep up the fire and press the folks who have done this abrupt thing. So go ahead and please, please vote up the Connect item on the Microsoft website created by Jen Stirrup – a SQL Server MVP – protesting and asking for the MCM program back.

One last thing – another blogger has shared his displeasure in this post, and it made sense to me: http://michaelvh.wordpress.com/2013/08/31/microsoft-is-retiring-the-mcsmmca-program/

If you happen to read this post, please share it with as many people as possible so that there is a movement to reverse the decision, or at least to come up with something better than MCM. And even if you don't want to share this post, don't forget to vote up the Connect item.

Thanks for reading the bereavement news.

Sad Feeling – Shyam Viking.

SSIS is for ETL, ELT or EL….clamor continued September 3, 2013

Posted by msrviking in Business Intelligence, Data Integration, Design, Integration Services.

In yesterday's post I mentioned that we had to design and implement an SSIS solution that does EL, and I had my own gripe about addressing only the non-functional requirements. It was just a process of extraction and loading – EL.

Today let me start off with the definitions of ETL and ELT, and then the new word EL, where the T is missing, or supposedly not to be considered here. While writing this post I realized I should put across a few notes on what ETL or ELT means, and the differences between the two implementations. I could probably have listed the points in a neat comparison table, but blame it on my laziness that I am sharing these links instead. Trust me, I have read them, and I completely acknowledge the technical content's relevance to ETL vs. ELT.

http://www.dataacademy.com/files/ETL-vs-ELT-White-Paper.pdf

http://blog.performancearchitects.com/wp/2013/06/13/etl-vs-elt-whats-the-difference/

In the end, ELT means you have a powerful infrastructure at the destination to handle the transformation using set operations, along with a staging area there in which to transform before pushing the data into the destination tables. In ETL processes, on the other hand, the tools have the power to do the transformation on a parallel infrastructure and in-memory, away from the destination systems; after the data is transformed it is loaded into the databases for the business to consume.

So, with these principles in mind, we still had to implement EL instead of ETL or ELT. Personally I didn't find the approach a good one to implement, since the same thing could have been done easily, without any development effort, using CDC for Oracle in SQL Server or Replication. However, I found some finer pros of this approach, and I remember noting them in one of my earlier posts over here. Those few are:

  1. SSIS doesn’t need any additional changes to be done on the source end, except that we configure a connector to pull the data.

    Replication from Oracle, in contrast, has to create temporary tables and triggers in the source schema, which is an overhead on top of the changes to the source schema itself. That is definitely not acceptable to any business or system owner. CDC for Oracle in SQL Server helps to a large extent in addressing this problem, but enabling certain properties on the source instance would still be an overhead from a security standpoint.

  2. SSIS essentially gives flow control over the data, and we could sync data with a pre-set precedence of steps. For example, the master tables' data should be ported first, and then the transactional tables'.
  3. SSIS could be configured to pull data from the source based on business criteria, by using parameters in the WHERE clause (see the sketch after this list).
  4. SSIS gives finer control over error logging and handling, and of course we could have retry packages to pull in data after a failure.
  5. SSIS also gives me good control over defensive programming so that all business data is in place.
  6. SSIS could be tweaked and tuned for optimal performance from extraction through loading.
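
To illustrate point 3 above, the source component in the package carried a query along these lines, with the ? placeholders mapped to package parameters or variables; the object names here are made up for illustration.

```sql
-- Source query inside the SSIS data flow (hypothetical table/columns).
-- The ? markers are mapped to package parameters, e.g. an extraction window,
-- so the same package can pull a different slice of data on every run.
SELECT order_id, customer_id, amount, last_modified
FROM   orders
WHERE  last_modified >= ?
  AND  last_modified <  ?;
```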

Overall, we have control over what happens while the data is extracted and loaded. Now I am thinking all of these are perfectly good design guidelines for any ETL or ELT, and how I wish we hadn't had to stop at just an EL process, huh.

Cheers!

SSIS for Extraction & Loading (EL) only September 2, 2013

Posted by msrviking in Business Intelligence, Data Integration, DBA Rant, Design, Heterogeneous, Integration, Integration Services.

There was a series of posts earlier on the different ways to pull data from Oracle to SQL Server, where I shared how to port data through Replication, CDC for Oracle, and a few other options like SSIS. I thought I would give a starter to a series of post(s) on how we went about picking SSIS as the way to bring data from Oracle to SQL Server.

The major drivers (business and non-functional) for picking the solution were

  • The destination database on SQL Server should be a replica of the source
  • No business rules or transformations need to be applied when the data is pulled
  • The performance of the pull and the load should be optimal, with no overheads or bottlenecks on either the source or the destination

The destination system was referred to as an ODS, but it surely isn't the same as the Operational Data Store of DWH systems. Sadly, that naming convention had already been adopted by the shop where we had to implement the solution, so you might see me using the word ODS for the sake of explaining the implementation, and I am sorry for the incorrect usage of the word. You will probably see a few more conventions and standards being skewed in the implementation, and of course a bit of my commentary on how this could have been handled better.

So the ODS was to hold data that would be used by downstream business analytics applications for the business users, with the intent of providing self-service BI. The ODS was to be an exact copy – in schema and in data – of the source. The job was to pull data from the source without any performance overhead or failures. Not to forget, there needn't be any transformation of the data. In the end we used SSIS as only an Extraction & Loading tool instead of ETL – Extraction, Transformation and Loading.

This sounds simple, eh? But beware, it wasn't that simple, because the design had to be kept simple while addressing all of the above factors. The SSIS package(s) had to handle these at a non-functional level:

  • The package should be configurable to pick up and pull data from any set date
  • The package should be re-runnable from the point of failure
  • The package should have the configurability to address performance needs – package execution time should be low, the source should not be overloaded, and the destination should run at full efficiency while the data is loading
  • The package should be configurable to pull data from the source in preset chunks. The chunks could be period-based – days/months/years – or a number of records per period (a rough sketch of this follows the list)
  • The package should have the option to flag any other dependent packages to run or not run during the initial and incremental loads
  • The package should have a defensive design to handle a bunch of different types of errors
  • The package should have error logging and handling at the package level and at the record level
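
A rough idea of how the set-date and preset-chunk requirements can be backed by a small control table that the packages read and update; this is only a sketch with made-up names, not the actual implementation.

```sql
-- T-SQL sketch of a watermark/config table (hypothetical names).
CREATE TABLE dbo.etl_watermark
(
    table_name     sysname      NOT NULL PRIMARY KEY,
    last_loaded_to datetime2(0) NOT NULL,  -- restart point after a failure
    chunk_days     int          NOT NULL   -- size of each extraction chunk
);

-- The package reads its next window from here...
SELECT last_loaded_to,
       DATEADD(DAY, chunk_days, last_loaded_to) AS load_until
FROM   dbo.etl_watermark
WHERE  table_name = N'orders';

-- ...and bumps the watermark only after the chunk loads successfully,
-- which is what makes the package re-runnable from the point of failure.
UPDATE dbo.etl_watermark
SET    last_loaded_to = DATEADD(DAY, chunk_days, last_loaded_to)
WHERE  table_name = N'orders';
```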

In toto, this solution was more about how we design and implement for the non-functional requirements, leaving aside the functional side of the data. This type of implementation is half-blind, and I will talk more about the cons we had to face when I get into the details of each implementation.

A post after a gap of a few weeks, and one with learnings and summaries of how it could have been better.

SQL Server 2012 CDC for Oracle July 5, 2013

Posted by msrviking in Architecture, Change Data Capture, Configuration, Data Integration, Heterogeneous, Integration, Integration Services, Oracle.

Sometime in June 2013, I had to work on an alternate option to Replication to replicate data from Oracle to SQL Server 2012. The primary requirements were

  1. There should not be any performance impact on the Oracle instance, and the CDC should work seamlessly with a RAC setup that has multiple log file locations, reading DML changes irrespective of where the transaction happens (on different nodes of the Oracle RAC)
  2. No additional objects (temporary tables, triggers) should be created in the Oracle instance, unlike what happens in a Replication setup
  3. The DDL changes on the source should be manageable with appropriate configuration

Keeping all this in mind, I brought up CDC for Oracle in SQL Server 2012 as the next best option and built a POC. As a result of the POC I documented all of the steps, or rather the implementation picture of the POC, for my future reference. Today I decided to share it with you all so that you too can benefit from the implementation.

So here go the excerpts from the document. Sorry, I couldn't convert it into blog style, though.

Cheers and Enjoy.

Change Data Capture (CDC)

Overview

Change Data Capture (CDC) is a way of tracking and capturing changed data on a real-time basis.

Data changes caused by DML operations like insert, update, and delete are logged into the log files of the source database. The log files serve as input to the capture process, which runs asynchronously. The capture process reads the log files and adds the change information to the tracking tables that are associated with the respective source tables.

The below diagram (as represented in MSDN) summarizes the process of change data capture.


The data captured in the tracking tables is consumed by SQL Server Integration Services to refresh the end business tables in an Operational Data Store (ODS).

Internals of SQL Server 2012 CDC for Oracle

The capability of change data capture is to read data from data sources and synchronize the data changes incrementally from source to target tables, in real time.

The Change Data Capture for Oracle setup has these components

Oracle Database Instance

The data changes happening on the Oracle source tables are captured into redo log files, and these redo log files are archived for reading by Oracle LogMiner.

Oracle LogMiner can read the archived redo log files only if, on the Oracle database, the following are in place (a rough sketch of the corresponding commands follows this list):

  1. Archived log mode is enabled
  2. Supplemental logging is enabled
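
For reference, these two prerequisites map to Oracle commands roughly like the ones below, run as a suitably privileged user; this is only a sketch (the table name is the POC sample), and the exact steps depend on the environment.

```sql
-- Oracle sketch: run from SQL*Plus as SYSDBA.
-- Put the database in archivelog mode (needs a restart through MOUNT state).
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Enable minimal supplemental logging at the database level...
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- ...and per-table supplemental logging for the captured columns
-- (the CDC designer generates a similar script for each selected table).
ALTER TABLE cdc_poc.dept ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```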

CDC Service & CDC for Oracle Designer

SQL Server 2012 has a new feature called SQL Server 2012 CDC for Oracle, available as part of Service Pack 1 (SP1). The CDC feature comes with two installables as part of SP1, and these files install the CDC Service Configuration and Oracle CDC Configuration utilities.

Console for CDC Service

A prerequisite before creating a CDC service is to have a system database called MSXDBCDC created on the SQL Server instance. The database is created by executing the Prepare SQL Server step in the CDC Service Configuration console. The database consists of tables that store the configured databases, services, and trace (diagnostics) information.

All CDC operations use the CDC service to synchronize data, and the service is created as a Windows service. The CDC Service management console is used to configure the CDC service; however, CDC services can be configured and run on any server other than the ODS or Integration Services (IS) systems. Also, more than one CDC service can run on a dedicated system.

Designer console for CDC for Oracle

A CDC for Oracle instance is configured under a CDC service, and multiple CDC instances can be configured against an Oracle database using the CDC designer console.

A CDC instance for Oracle

  1. Has a connection string to the Oracle instance. The connection string is configured against the SCAN name or a VIP name
  2. Has multiple source tables (with their respective primary keys) and selected capture columns. Each source table with capture columns has a corresponding supplemental logging script which is executed on the Oracle instance. Supplemental logging on the table captures the data changes on those columns into the redo log files
  3. Creates a CDC enabled database on SQL Server
  4. Has the option of enabling advanced properties like cdc_stop_on_breaking_schema_changes, or trace. These properties help in managing the instance behavior upon a schema change in the source table, and add diagnostics for troubleshooting issues

SQL Server

    CDC enabled database

A CDC instance creates a CDC enabled database, and each such database has mirror tables and capture instances (tables) representing each source table, plus system tables that provide information on the change capture objects, columns, DDL history, and the health of the CDC instance.

The CDC enabled database is created by default with a case-sensitive collation, which can be changed by modifying the database deployment script generated by the CDC instance designer.
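
As an illustration, the CREATE DATABASE statement in the generated deployment script can be tweaked before it is run; the database name and target collation below are purely examples.

```sql
-- T-SQL: edited line from the designer-generated script (hypothetical names).
CREATE DATABASE [CDC_POC_DB]
COLLATE SQL_Latin1_General_CP1_CI_AS;
```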

Mirror tables

Each source table in Oracle has a mapped mirror table in the CDC enabled database in SQL Server. The mirror table is a close copy of the source table, except for the data types and lengths, which differ in SQL Server. Any schema change on the source table is detected by the CDC process by checking against the schema of the mirror table.

The mirror tables are always empty and should not be used as business tables. A mirror table is identified by a specific schema owner and is not created under the default dbo schema.

     A sample name of a mirror table is CDC_POC.CDC_POC_DEPT.

Capture instances

Each source table has a capture instance, created as a system table. The CDC process populates capture instances with the data that changes in the source. These tables have columns that represent the source or mirror table, plus columns that store the start and end Log Sequence Number (LSN), a sequence value, the type of operation, and an update mask.

Here is the sample structure of a capture instance:

  • __$start_lsn – the LSN associated with the commit of the transaction that made the change
  • __$seqval – the sequence, or order, of the rows the transaction affected
  • __$operation – the type of DML operation: 1 = delete, 2 = insert, 3 = update (old value), 4 = update (new value)
  • __$update_mask – a bit mask of the column ordinals

The capture instance always has the schema identifier cdc, and its name ends with _CT.

A sample name looks like cdc.CDC_POC_DEPT_CT.

The capture instance holds information on the DML changes that are committed on the source table. This table provides input in the form of changes (the data that changed, the columns whose values changed, and the type of change) for the SSIS packages that refresh the business tables.
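
To make that concrete, this is the sort of query the incremental processing (or a curious DBA) can run against the sample capture instance named above; the LSN range here is just a placeholder for the processing window that the CDC Control Task keeps in cdc_state.

```sql
-- T-SQL sketch: read the changes recorded in a capture instance.
-- The LSN range stands in for the window tracked in cdc_state.
DECLARE @from_lsn binary(10) = 0x00,
        @to_lsn   binary(10) = 0xFFFFFFFFFFFFFFFFFFFF;

SELECT __$start_lsn,
       __$seqval,
       __$operation,    -- 1 = delete, 2 = insert, 3 = update (old), 4 = update (new)
       __$update_mask   -- bit mask of the changed column ordinals
       -- plus the captured source columns, e.g. DEPTNO, DNAME (hypothetical)
FROM   cdc.CDC_POC_DEPT_CT
WHERE  __$start_lsn >  @from_lsn
  AND  __$start_lsn <= @to_lsn
ORDER  BY __$start_lsn, __$seqval;
```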

The capture instance can grow in size based on the changes captured from the source. The table can be purged on a regular basis to keep the size of the table and the database within standards.

Business tables

The business tables contain the data that is used by the business applications. A business table can be created with any standard naming convention and can live in any other database; it need not be in the CDC enabled database. That database can even be hosted on another SQL Server instance.

The schema of the business table can be the same as the source or mirror table, but the data types can vary. If the data types are different then the SSIS packages should perform the appropriate data type conversions.

The business table is initially populated through SSIS as a full load, and then incremental loads are applied to refresh the data. The incremental loads fetch the data changes from the capture instance tables and perform insert or merge operations on the end tables.
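
A minimal T-SQL sketch of what the insert-or-merge apply step amounts to, assuming hypothetical DEPT business and staging tables; in the actual packages this is done through the CDC Splitter and OLE DB components rather than a single statement.

```sql
-- Apply staged net changes to the business table (hypothetical objects).
MERGE ods.DEPT AS tgt
USING stage.DEPT_CHANGES AS src
      ON tgt.DEPTNO = src.DEPTNO
WHEN MATCHED AND src.change_type = 'D'
    THEN DELETE
WHEN MATCHED AND src.change_type = 'U'
    THEN UPDATE SET tgt.DNAME = src.DNAME,
                    tgt.LOC   = src.LOC
WHEN NOT MATCHED BY TARGET AND src.change_type = 'I'
    THEN INSERT (DEPTNO, DNAME, LOC)
         VALUES (src.DEPTNO, src.DNAME, src.LOC);
```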

Overall process

    The overall CDC process is summarized in the below diagram.


The CDC service fetches the data changes from the Oracle source table. The CDC service connects to the Oracle database instance and invokes the Oracle LogMiner. The CDC process requests the LogMiner for the data changes that have been committed on a source table. The LogMiner starts the data fetching process from the archived redo logs, and returns the changes against columns that are enabled with supplemental logging.

The CDC service then populates the changed data into capture instances along with last sequence number, data that is changed in a column, and the type of operation.

The CDC service uses the CDC instance information to connect to an Oracle instance and to fetch data changes in particular source table for the specified columns.

Implementation Scenarios

Data Synchronization – Connecting to Load Balancer

The diagram below summarizes a scenario where the CDC for Oracle instance is connected to Oracle RAC through the SCAN IP name.


Oracle Cluster

The Oracle instance is in a RAC configuration, where Node 1 (xxxxx01.xxxx.net) and Node 2 (xxxxx02.xxxx.net) are load balanced and connected by using a SCAN IP. The two nodes share the same storage area, and the redo log files generated by these nodes are archived and stored in file locations in that storage area. There are multiple file locations, and each location is created based on the day, month, and year. Each folder can have multiple archived redo log files.

SQL Server

The SQL Server system hosts CDC service and CDC Designer for Oracle tools.

The CDC service is created using the CDC service management console and runs as a Windows service under the Local System account.

The CDC for Oracle instance is created using the CDC designer console. The CDC instance connects to the Oracle instance using the SCAN name – xxx-rac01-scan.xxxx.net. The CDC process reads the archived logs using LogMiner and writes to the CDC enabled DB. The data changes that are captured are written into the capture instances in SQL Server.

The mirror tables are empty and are never populated either during the full load initially or during the data refreshes through incremental load.

    ODS

The business tables are created in the operational data store, and each table is populated with a full load from the source the first time. The data changes captured through the CDC process are applied to these tables through incremental loads.

    SSIS

SQL Server Integration Services (SSIS) runs on a different system. The purpose of SSIS is to perform the initial full load into the business table, and once CDC synchronization is set up, to carry out the data refreshes on the business tables in the ODS.

There are two packages, one performing the full load and one the incremental load. The summary below lists the components used, the purpose of each component, and the overall process executed by each package.

Initial Load package – executed only once, as a full load, when CDC is set up or re-configured. The package deletes the data in the destination business table and populates the latest data from the Oracle source table using the Oracle connector. After the data is loaded from Oracle into the SQL Server business table in the ODS, the system table cdc_state is updated with an LSN value that is referenced as the next starting point for reading data changes from the capture instances.

  • Execute SQL Task – deletes the data from the business table before the full load
  • Data Flow Task – populates data from the Oracle source table into the business table using the Oracle connector from Attunity
  • CDC Control Task – sets an LSN value in cdc_state (a system table created when the package is executed) as a bookmark; this value is referenced by the incremental package for reading changed data

Incremental Load package – executed after the initial full load, and can be scheduled to fetch the data changes and refresh the end business tables in the ODS. The CDC Source component marks the processing range of the data based on the last saved LSN value. New rows are inserted through the OLE DB destination, while updates and deletes go through the OLE DB Command. After the data refresh completes on the destination table, the process is marked as completed and the latest LSN value is saved into cdc_state.

  • CDC Control Task – reads the last saved LSN value in cdc_state and marks the start of fetching data from the capture instance
  • Data Flow Task – reads the capture instance for All or Net data changes using the CDC Source component, starting from the last LSN value saved in cdc_state; the CDC Splitter component splits the rows into three outputs – insert, update and delete – and directs inserts to the OLE DB destination and updates/deletes to the OLE DB Command
  • CDC Control Task – marks the end of the read and records the state of reading and of populating data into the business tables


Data Synchronization – Connecting to RAC Nodes

The diagram below summarizes a scenario where the CDC for Oracle instances are connected to the Oracle RAC nodes individually through their VIP names.


This section describes the approach adopted to verify that SQL Server 2012 CDC for Oracle works with Oracle RAC nodes and with multiple redo log files in different locations.

The first and second diagrams above show that the Oracle CDC instances are connected to Node 1 (xxxxx01.xxxx.net) through the VIP connection xxxxx01.xxxx.net:1521/orcl and to Node 2 (xxxxx02.xxxx.net) through the VIP connection xxxxx02.xxxx.net:1521/orcl, respectively. CDC for Oracle reads the changed data committed independently on each node.

As part of verification the following steps were executed

On Node 1 or Node 2:

  1. Logged on to the Oracle nodes using the connection string uname/pwd@xxxxx01.xxxx.net:1521/orcl or uname/pwd@xxxxx02.xxxx.net:1521/orcl
  2. Ran a SELECT query against a sample table under the same schema and captured the existing data in the table for later validation
  3. Ran DML (UPDATE, DELETE, INSERT) statements on each node, and committed the transactions
  4. Validated the DML changes done on each node by running SELECT statements
  5. Verified that the CDC instances were capturing the changed data by running SELECT statements against the capture instances

The result after these steps was that the CDC instances (configured against each node) were reading the DML changes done on each node, and the changes were captured in the capture instances on SQL Server.

The above result shows that CDC for Oracle works in a RAC environment, reading the data irrespective of where the transactions are committed. It also confirms that CDC for Oracle instances capture data changes that are logged into multiple archived redo log files in multiple locations.

The key to the above conclusion is that the CDC process fetches the data changes by calling Oracle LogMiner, which in turn reads the multiple archived redo log files in their different locations.

The captured data in the capture instances is applied to the business tables using the incremental load package, as described in the SSIS section above.

Schema changes

This section explains the DDL statements that impact the Oracle CDC instances, and the steps to apply those changes to the destination database.

SQL Server 2012 CDC for Oracle does not propagate schema changes on Oracle tables to SQL Server. However, an advanced property named cdc_stop_on_breaking_schema_changes helps in capturing either accurate and consistent data or the latest data. This property can be set per CDC instance under the Advanced Properties tab.

The values that can be set for this property are 0 and 1. In the event of a schema change on the Oracle source table, when cdc_stop_on_breaking_schema_changes is:

  1. Set to 1, data capture is stopped; this helps in capturing consistent data, with exceptions in a few scenarios
  2. Set to 0, the corresponding capture instance in SQL Server is dropped; this helps in capturing the latest data

If the schema changes on the source are pertinent, and should be propagated to the capture instance in SQL Server and then to the business table, a series of steps has to be followed.

The summary below covers the different DDL changes, their impact on the CDC instance capture process, and the way to handle each schema change.

Adding a new column to the source table

Impact: if cdc_stop_on_breaking_schema_changes is set to 0, the capture instance for the table is dropped; if it is set to 1, CDC keeps capturing data for the existing columns and ignores the new column.

Action steps (when the property is set to 1 and the schema change has to be applied):

  1. Back up the existing CT table and the cdc_state table
  2. Drop the source table from the CDC instance
  3. Add the altered source table back to the CDC instance
  4. Import the historical data from the backed-up CT table into the new capture instance
  5. Modify the SSIS package to capture the new column for the initial and incremental loads from the last LSN

Dropping an existing column from the source table

Impact: if cdc_stop_on_breaking_schema_changes is set to 0, the capture instance for the table is dropped; if it is set to 1, CDC stops data capture with an error.

Note: a column can be dropped on the source table only after the supplemental logging on that table is dropped.

Action steps (when the property is set to 1 and the schema change has to be applied):

  1. Drop the supplemental logging on the source table
  2. Drop the column on the source table
  3. Back up the existing CT table and the cdc_state table
  4. Drop the source table from the CDC instance
  5. Add the altered source table back to the CDC instance
  6. Import the historical data from the backed-up CT table into the new capture instance
  7. Modify the SSIS package to reflect the dropped column for the initial and incremental loads from the last LSN

Renaming an existing column on the source table

Impact: if cdc_stop_on_breaking_schema_changes is set to 0, the capture instance for the table is dropped; if it is set to 1, CDC stops data capture with an error.

Action steps (when the property is set to 1 and the schema change has to be applied):

  1. Back up the existing CT table and the cdc_state table
  2. Drop the source table from the CDC instance
  3. Add the altered source table back to the CDC instance
  4. Import the historical data from the backed-up CT table into the new capture instance
  5. Modify the SSIS package to capture the renamed column for the initial and incremental loads from the last LSN

Change in data type or data type length on the source table

Impact: if cdc_stop_on_breaking_schema_changes is set to 0 and the data type is changed (for example int to varchar), the capture instance for the table is dropped; if it is set to 1, then

  1. If the data type is changed (for example varchar to int), a NULL value is captured, although the different row operation states (3 & 4) are still logged
  2. If only the data type length is changed, only row operation state 3 is captured

Action steps (when the property is set to 1 and the schema change has to be applied):

  1. Back up the existing CT table and the cdc_state table
  2. Drop the source table from the CDC instance
  3. Add the altered source table back to the CDC instance
  4. Import the historical data from the backed-up CT table into the new capture instance
  5. Modify the SSIS package to capture the changed column for the initial and incremental loads from the last LSN

A schema change on the source table should be applied to the business tables through a series of steps like these so that accurate and consistent data continues to be captured.

CDIs-Non Functional Requirements-Other few May 21, 2013

Posted by msrviking in Architecture, Business Intelligence, Data Integration, Design, Integration, Security.

In the CDIs – Non-Functional Requirements series I have covered NFRs like Performance, Scalability, Availability, Reliability, and Utilization Benchmarks that are useful when building a CDI system. In this post I shall talk about less-known but important NFRs like Maintainability, Extensibility, and Security. I wouldn't strictly term these NFRs, but questions on these topics will give you insight into how the system is expected to behave, from a business and technical perspective. Also, these don't work in silos; they link back to some of the ones mentioned in the list above.

Maintainability

  1. What is the maintenance/release cycle?

    This question gives us an idea of the practices followed by the business and technical teams around the release cycle. Each release cycle usually means changes in the source system which may affect downstream applications. The more release cycles there are, the more difficult the job of maintaining the code base. To avoid long-term overheads, the system, the data model, and the low-level design of the ETLs should be built on the assumption that these changes will be constant and frequent.

  2. How frequently do source or target structures change? What is the probability of such change?

    This point is related to the first question, but elicits information one level deeper by asking: if there are frequent maintenance cycles, what are those changes at the source, and what are they expected to be in the target? If the changes are constant, frequent, and not too complex, then the data model and the ETLs have to be configurable to accommodate 'certain' changes in the source. That configurability comes with a rider and a tradeoff against other NFRs like performance: changes in the data source can affect the performance of the ETL, and sometimes the SLA laid down by the business can't be met.

Having said this, the next NFR is closely related to Maintainability.

Configurability

The answers to questions under this topic tend to be challenging for both the business and technical teams. Not everyone is sure what should be configurable and what shouldn't, based on the changes expected from the business at the source system level. One gets answers like "not sure", "maybe", "quite possible in the near future", or "probably" the source will change, while what that change will be remains an open question. Providing an appropriate solution at the different layers will be a daunting task for the technical team.

  1. How many different/divergent sources (different source formats) are expected to be supported?

    The answer to this question helps in understanding what source formats (csv, tsv, xml, etc.) have to be supported; if they differ widely, then alternative design practices can be implemented on the target to provide extensibility across all formats.

  2. What kind of enhancements/changes to source formats are expected to come in?

    An answer to this point helps in deciding whether there should be abstract transformations or reusable mappings.

  3. What is the probability of getting new sources added? How frequently does the source format change? How divergent will the new source formats be?

    This information helps in knowing how often the source formats change, and whether the change is in existing sources or in new ones. Again, it also helps in deciding between abstract transformations and reusable mappings.

The last NFR is Security, which is usually the last one considered in any system architecture and design, but is most important.

Security

In the case of CDI we are dealing with sensitive customer information and transaction details. It is important to understand how the business treats this type of data, and how the security compliance team views data being gathered from different source systems and consolidated in a single place – the "data hub". The bunch of questions below covers data protection rather than access levels or privileges of different users.

  1. Are there any special security requirements (such as data encryption or privacy) applicable?

    The answer to this question is usually "no", but there are certain fields brought in from CRM and ERP systems that need to be protected from misuse in case of a security breach. It is suggested that this question be explained well with a real scenario, and then a decision can be taken on whether to enable data or database encryption.

  2. Are there any logging and auditing requirements?

    This is rarely required, since the data from the different systems is mostly massaged and made available in reporting format through a different data sink. A discussion here helps in deciding whether security should be handled at the reporting level (by enabling the various security features there) rather than inside the heavy data processes.

I hope all these posts on NFRs for CDIs help you in architecting and designing a data hub system that is highly available, scalable, high-performing, and reliable.

Cheers!

Oracle to SQL Server Replication-Design considerations May 21, 2013

Posted by msrviking in Configuration, Data Integration, Design, Heterogeneous, Integration, Replication.

I am thinking I have covered the best of the points needed to configure transactional replication between Oracle and SQL Server in my previous posts. In this post I would like to give my 2 cents on the design considerations and limitations of Oracle publishers that I had to keep in mind during this exercise. There is a neat list of other points published in this TechNet article.

  1. Configuration of Oracle Publisher with options

    Per the above article I learnt that Oracle publishing can be done using either the Oracle Gateway or the Oracle Complete option. The main difference between the two is that Oracle Gateway provides better performance but doesn't allow multiple transactional publications of a single published (or to-be-published) table, whereas with the Oracle Complete option the table can be published in several publications, at the cost of lower performance.

    I tried digging around on Google to see how these two options work as described, because I didn't try it on the setup I built to find the difference. I didn't find any reliable sources that talk about the difference in detail, but a post by Richard Douglas (Editor in Chief of SQLServerPedia) gives a short description: "Gateway is for high performance replication solutions. Complete is when you want to transform your data or to do row filtering". I might have to try it and see what that means in practice.

    I remember seeing row filtering in a transactional replication using the Oracle Gateway option, and I have yet to find out how this differs when Oracle Complete is chosen. For now I shall park this limitation for future tries and share my findings then.

  2. This point (only a few object types are supported) is quite obvious, because the similarities between Oracle and SQL Server at the object implementation level are few. So,

    What are the objects that are supported?

    1. Tables
    2. Index-organized tables
    3. Indexes
    4. Materialized views (as tables)

    What are those objects that cannot be replicated, but can exist on published tables?

    1. Domain-based indexes
    2. Function-based indexes
    3. Defaults
    4. Check constraints
    5. Foreign keys
    6. Storage options (tablespaces, clusters, etc.)

    What are the objects that cannot be replicated at all?

    1. Nested tables
    2. Views
    3. Packages, package bodies, procedures, and triggers
    4. Queues
    5. Sequences
    6. Synonyms

    Okay, with this list we know what can and cannot be replicated. I picked the list from the above article, but was hunting to find out why we can't replicate these objects. One simple reason that keeps coming to my mind is that some of the objects in the 'cannot' list don't have equivalents in SQL Server. Okay, so aren't there workarounds for these objects? For example, packages bundle programmed objects like package variables, cursors, and a few others, which have to be converted through a specific process, and that process can't be Replication. That exercise would be a migration of objects from Oracle to SQL Server, and surely can't be part of a Replication topology. Does this apply to simple objects also? I don't have a technical reason to explain that either, so I am concluding with my 2 cents. Meanwhile I shall see if I can find better "reasons".

    Here is a link from Microsoft on what it means to convert some of the objects listed above. This was one of the reasons that helped me conclude, logically, that this is part of a migration from Oracle to SQL Server and can't be part of Replication at a conceptual or architectural level.

Since my setup was to test the new Oracle Publisher, I didn't have to keep tabs on the other design considerations, but I am sure I will have to deal with them too when this gets into full deployment mode rather than a POC.

I hope this short post helps you folks out there, and happy reading.

The usual ask – please feel free to share your thoughts or comments.

Cheers!

Oracle to SQL Server data synchronization–What are the other ways? May 17, 2013

Posted by msrviking in Architecture, Configuration, Data Integration, Heterogeneous, Integration, Integration Services, Replication.

In this series I have been talking about the environment setup, configuring the box with the appropriate drivers, setting up the Oracle Publisher, and then a short post on Transactional Replication. While doing this I had my share of issues setting up and configuring the whole workflow, so a parallel thread was running in my head asking only one question – "Is this the only way to synchronize data from Oracle?"

The only other option that looked viable was using SSIS, and with SSIS 2012 a lot more features have come in to help the situation. I haven't yet worked with SSIS CDC for Oracle per the links below, but I intend to do it sometime.

Here are a few links I found when I Googled around on this topic:

1. Microsoft SQL Server 2012 Change Data Capture for Oracle by Attunity Topics (http://social.technet.microsoft.com/wiki/contents/articles/7644.microsoft-sql-server-2012-change-data-capture-for-oracle-by-attunity-topics.aspx)

2. CDC for Oracle in SQL Server 2012 (http://blogs.msdn.com/b/mattm/archive/2012/03/26/cdc-for-oracle-in-sql-server-2012.aspx)

3. CDC in SSIS for SQL Server 2012 (http://www.mattmasson.com/2011/12/cdc-in-ssis-for-sql-server-2012-2/)

4. Installing Microsoft SQL Server 2012 Change Data Capture for Oracle by Attunity (http://social.technet.microsoft.com/wiki/contents/articles/7647.installing-microsoft-sql-server-2012-change-data-capture-for-oracle-by-attunity.aspx)

5. Change Data Capture (CDC) for Oracle video now available (http://www.mattmasson.com/2012/09/change-data-capture-cdc-for-oracle-video-now-available/)

6. SQL Server 2012 CDC for Oracle – a Review of One Implementation (http://blogs.msdn.com/b/dfurman/archive/2013/04/15/sql-server-2012-cdc-for-oracle-a-review-of-one-implementation.aspx)

7. Troubleshoot CDC Instance Errors in Microsoft Change Data Capture for Oracle by Attunity (http://social.technet.microsoft.com/wiki/contents/articles/7642.troubleshoot-cdc-instance-errors-in-microsoft-change-data-capture-for-oracle-by-attunity.aspx)

Although I say that SSIS is the best option, it is yet to be tested, and I wanted to list the differences between implementations using Replication and SSIS. So I built a quick comparison between Replication and SSIS, from the perspective of DDL and performance to start with, and this is work in progress. As of now I am unable to judge the timeline for completing this comparison with the other factors included. Nevertheless, I am thinking this should help you folks too, just in case.

DDL / schema changes (new column addition; change in a mapped source column name; change in the data type of a source column; dropping a column in the source table)

  • SSIS: Yes, with customized coding and SMO – SMO-based programming can help detect changes at the source and remap them at the destination
  • Replication: No – the subscriber has to be dropped/deleted, and the publisher has to add the new table or regenerate the snapshot in the case of an Oracle publisher

Performance – on the source table or system

  • Replication: impacted – triggers are created by SQL Server replication on the published tables to capture data changes at the row level, and such triggers can be an overhead on the Oracle published tables
  • SSIS: impacted, but less than replication – no triggers are created for tracking the data changes; this can be achieved using CDC for Oracle with SSIS 2012

Performance – on the destination table or system

  • Replication: impacted – for large data changes (inserts and updates), transactional replication can cause performance bottlenecks on the destination/subscriber tables because the DML statements cause lock escalation on those tables; another option for large data changes is snapshot replication
  • SSIS: impacted, but less – the impact can be controlled with batch operations

Synchronization – real-time

  • Replication: supported – transactional replication achieves real-time data sync between Oracle and SQL Server with very minimal latency, although large data changes can impact performance
  • SSIS: supported – with CDC for Oracle using SSIS 2012, real-time synchronization is achievable without the performance overhead of triggers on the Oracle tables; CDC creates the change tables in SQL Server, so the data changes don't need extra configuration, maintenance, or setup, and the captured changes are applied through the CDC tables

Please feel free to share your thoughts or comments. Happy reading.

Cheers!

Oracle to SQL Server Replication – Configure Transactional Replication w/ Pull Subscription May 16, 2013

Posted by msrviking in Configuration, Data Integration, Heterogeneous, Integration, Replication.

This post is going to be short, because I am thinking I shouldn't give my own explanation of how to configure Transactional Replication in SQL Server against an Oracle Publisher, since there is no real difference that I could add.

Instead, let me share a few links by other SQL Server experts that explain well how to configure SQL Server Transactional Replication; the only difference is that you have to pick the already-configured Oracle Publisher in the publication list.

So here are those for your reference and as guidance.

The last link was my source for working on the design, implementation, and troubleshooting of Oracle Publisher – SQL Server Transactional Replication w/ Pull Subscription. In a few of the concluding posts of this series I shall talk about the design, the deployment, and the issues I faced.

Happy reading.

Cheers!

CDIs-Non Functional Requirement-Utilization Benchmarks May 16, 2013

Posted by msrviking in Architecture, Business Intelligence, Data Integration, Integration.

In this post I shall share my thoughts on how these questions help in designing a system to meet the response-time and performance expectations. Some of them are about the history of the existing systems and a few are about future requirements. A lot of English, eh? So the straight message is that answers to these questions help me gather "baselines and benchmarks" of the past and the future. And once I know them, I should have:

  1. A system design that performs with low latency (quick response time) for any operation
  2. Built an infrastructure (especially servers) that would meet future performance requirements

Let me group these questions based on purpose and tag each with some keywords, for easy understanding.

Q# – Question – Purpose (keywords)

  1. How many rows are created every day in the transactional DBs? – To get current baselines on how many rows are generated from business transactions (row size, storage space, disk size, CPU size, computation power)
  2. What is the size of the data generated across LOBs? – To get current baselines on how many rows are generated from business transactions per line of business (row size, storage space, disk size)
  3. What is the agreeable processing time of data in the data hub? – To understand future benchmarks of the data hub (CPU sizing)
  4. How stale can the reporting data be? – To understand future benchmarks of the data hub (storage sizing)
  5. How much of the processed data and reporting data has to be archived/purged? – To understand future benchmarks on how the archived data would be used in the reporting system (server sizing, storage sizing)
  6. How many years of data have to be archived in the system? – To understand future benchmarks on how the archived data would be used in the reporting system (server sizing, storage sizing)
  7. How many years of past data have to be processed? – To understand future benchmarks on how the archived data would be used in the reporting system (server sizing, storage sizing, CPU sizing, RAM sizing)
  8. What will be the year-on-year growth of data in the transactional source system? – To understand current baselines and future benchmarks on the data that needs to be processed (server sizing, storage sizing, CPU sizing, RAM sizing)
  9. What other data sources could be added over a period of 1-2 years, and how much data could they provide to the data hub? – To understand current baselines and future benchmarks on the data that needs to be processed (server sizing, storage sizing, CPU sizing, RAM sizing)
  10. What are the agreeable database system downtime hours? – To understand future benchmarks on how quickly the data processing should complete (CPU sizing, RAM sizing)
  11. What is the backup and recovery time required for the data hub and reporting system? – To understand how quickly the whole data hub and BI system should be back up and running (server sizing, storage sizing, CPU sizing, RAM sizing)
  12. How many users will be added year on year to the reporting system, and what are the types of users? – To understand future benchmarks on how the reporting and data hub system resources will be utilized (server sizing, storage sizing, CPU sizing, RAM sizing)
  13. Are there any other external systems that would require data from the data hub? – To understand current baselines of the data processing and BI systems (server sizing)
  14. How many rows of the transactional/business system need to be processed for the data hub?
  15. How much data is currently processed for reports?
  16. What types of data processing queries exist in the system that provide static/ad-hoc reports?
  17. What types of reports are currently available, and what is their resource usage?
  18. What are the query profiling inputs (CPU usage, memory usage, disk usage) for these data processing/reporting queries?

  Questions 14-18 share the same purpose: to understand current baselines of the data processing and BI systems (server sizing).

In the follow-up post for this NFR I shall let you all know how I translated these answers into design and server sizing. Until then, happy reading.

Please feel free to share your comments.

Cheers!

