The Cloud is amazing! The Cloud is easy. Why wouldn’t we move to the Cloud?

Unfortunately, migrating to the Cloud isn’t as simple as it often sounds. Cloud migration brings numerous uncertainties about up-front costs, ongoing expenditure, security and network architecture. Suddenly you are uneasy... Wouldn’t it be so much safer to keep hosting on-premise?

This is a common reaction in many IT departments when senior management mentions the “Cloud” as a solution for infrastructure resource problems (time, cost and people). It’s either too good to be true, too woolly, too complicated to manage and maintain, or too insecure. Before you know it, the Cloud dream is over.

When we decided to embark upon our very own Cloud journey, we experienced much of the same. However, as the determined bunch we are, we decided to explore some of the truths behind ‘Cloud’ migration. Here’s how our journey began.

Pre-Cloud transition: Selecting a Cloud provider, initial sizing and cost projections

There are a few companies offering Infrastructure as a Service (IaaS) in the Cloud, including Microsoft Azure and Amazon AWS. We went with Amazon purely because our development teams were already using it to spin up HANA instances on the fly for basic testing.

We began the process with a costing exercise to see how much it would cost to host our BW on HANA infrastructure in the Cloud. We needed this to work out our monthly budget. So, what did our on-premise infrastructure look like? It was made up of a BW 7.4 application server, a 3-node HANA database (500GB) and an NLS server. We presumed we could safely go with a similar infrastructure in the Cloud. However, we discovered tools and options that helped us get more creative with our initial sizing and infrastructure design.

AWS for HANA

AWS offers the following HANA Cloud offerings, of which only two support a BW scenario:

  • SAP HANA BYOL (Bring Your Own License) - Supports a BW scenario
  • SAP HANA Trial Systems - Supports a BW scenario for one user free of charge (except for the underlying AWS infrastructure) for a limited trial period
  • SAP HANA One
  • SAP HANA Developer Edition

Price models

We selected the SAP HANA BYOL model with the on-demand purchasing option for our non-production systems, where the cost depends on the number of hours the systems are online and in use. Little did we know that we were to be presented with many more options, each needing careful consideration.

We soon found out that Amazon has made things easier by providing a beta cost estimator but, yet again, there was slightly more to it. Although there were several components and options to choose from to arrive at a cost ‘guesstimate’, the total cost depended on the following factors (a rough sketch of such a guesstimate follows the list below):

  • Instance types
  • Number of nodes
  • Type and volume of disk storage
  • Access time
  • Data transfer to and from Amazon
  • Elastic IPs and backups
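To make our guesstimate repeatable, we found it helpful to think of it as a simple sum over those factors. The sketch below is just that, a minimal illustration in Python; every rate and quantity in it is a placeholder assumption rather than an actual AWS price, so the real numbers should always come from the AWS cost estimator for your region.

```python
# Rough monthly cost guesstimate for an on-demand, usage-based setup.
# Every rate and quantity below is a placeholder assumption for
# illustration; take real figures from the AWS cost estimator.

HOURLY_RATE_PER_NODE = 0.35        # assumed USD per instance hour
NODES = 3                          # e.g. a 3-node cluster
HOURS_ONLINE_PER_MONTH = 10 * 22   # roughly 10 hours/day, 22 working days

GP2_RATE_PER_GB_MONTH = 0.10       # assumed USD per GB-month of gp2 storage
STORAGE_GB = 1200                  # total provisioned EBS volume size

TRANSFER_OUT_RATE_PER_GB = 0.09    # assumed USD per GB out of AWS; inbound is free
TRANSFER_OUT_GB = 50               # estimated monthly outbound data

compute = HOURLY_RATE_PER_NODE * NODES * HOURS_ONLINE_PER_MONTH
storage = GP2_RATE_PER_GB_MONTH * STORAGE_GB
transfer = TRANSFER_OUT_RATE_PER_GB * TRANSFER_OUT_GB

print(f"Compute : {compute:8.2f} USD")
print(f"Storage : {storage:8.2f} USD")
print(f"Transfer: {transfer:8.2f} USD")
print(f"Total   : {compute + storage + transfer:8.2f} USD per month (guesstimate)")
```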

Moreover, assumptions had to be made about the general nature and type of usage. As part of our commitment to the Cloud, we accepted that:

a. Any costing exercise done at this stage was going to be our best guess until we had been running for a couple of months and could compare actuals against estimates.

b. While the infrastructure was being built, the cost would be reflective of build charges rather than actual usage charges.

Here is a brief description of options available under each category and what worked best for us:

Instance types

EC2 virtual machines are known as instances. While there were many Instance types on offer, only a few of them were supported for production.
[Image: EC2 instance types supported for the SAP HANA server and the SAP NetWeaver application server]

Not being in production, we had a wider and more cost-effective selection to choose from. Moreover, the instance (CPU) type determined the cost per hour, and because we had opted for the usage-based pricing model, we were looking at the lower end of this range.

[Image: per-hour pricing by instance type]

As a pleasant surprise, we found out later that the claim about being able to scale up the instance type, should we need to, was indeed true. This also meant we could increase our capacity on a need-only basis and scale back down as necessary.
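Scaling up or down in practice means stopping the instance, changing its type and starting it again. Below is a minimal sketch of how that could be scripted with the boto3 SDK; the region, instance ID and target type are made-up example values, and it assumes an EBS-backed instance (anything on the instance store is lost when the instance stops).

```python
# Minimal sketch: resize an EBS-backed EC2 instance with boto3.
# The region, instance ID and target type are made-up example values.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

instance_id = "i-0123456789abcdef0"   # hypothetical instance
target_type = "r3.2xlarge"            # scale up from r3.xlarge

# The instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": target_type},
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(f"{instance_id} is now running as {target_type}")
```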

For a database size of 250GB, a 2-node r3.2xlarge setup would have suited us best; however, we decided to make do with a 3-node r3.xlarge setup to shoehorn our infrastructure into an acceptable monthly budget.
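As a quick sanity check on that trade-off, the back-of-the-envelope comparison below totals up memory and an indicative hourly cost for the two options. The per-node memory figures reflect the r3 family as we understood it at the time, and the hourly rates are placeholder assumptions, not actual AWS prices.

```python
# Back-of-the-envelope comparison of the two sizing options.
# Memory per node reflects the r3 family as we understood it at the time;
# the hourly rates are placeholder assumptions, not actual AWS prices.

options = {
    "2 x r3.2xlarge": {"nodes": 2, "ram_gib": 61.0, "rate": 0.70},
    "3 x r3.xlarge":  {"nodes": 3, "ram_gib": 30.5, "rate": 0.35},
}

db_size_gb = 250  # current database size

for name, o in options.items():
    total_ram = o["nodes"] * o["ram_gib"]
    hourly = o["nodes"] * o["rate"]
    print(f"{name}: {total_ram:6.1f} GiB RAM total, "
          f"~{hourly:.2f} USD/hour (placeholder rate), "
          f"for a {db_size_gb} GB database")
```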

Storage types

General purpose SSD storage (gp2) was the most suitable for our needs. The "Instance Store" provided with each instance is only suitable for temporary data, as its contents persist only for the lifetime of the instance: once the instance is stopped, the instance store data is lost.

[Image: the instance store is a disk attached directly to the host hardware]
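For persistent data, gp2 EBS volumes are created and attached separately from the instance. The snippet below is an illustrative sketch only, using boto3 with a made-up availability zone, instance ID and device name.

```python
# Minimal sketch: create a persistent gp2 EBS volume and attach it.
# The availability zone, instance ID and device name are made-up examples.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

volume = ec2.create_volume(
    AvailabilityZone="eu-west-1a",
    Size=512,              # GiB
    VolumeType="gp2",      # general purpose SSD
)
volume_id = volume["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# Unlike the instance store, this volume survives instance stop/start.
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Device="/dev/sdf",
)
```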

Data transfer and access considerations

While data transfer considerations become important when transferring data out of Amazon EC2, transferring data into Amazon EC2 was relatively cheap. The 300GB of data we had to transfer from on-premise to the Amazon instances resulted in an instance usage charge but no additional data transfer charge.
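Since the instances accrue usage charges while they sit waiting for data, it is worth a quick back-of-the-envelope estimate of how long the upload will take. The bandwidth figure in the sketch below is purely an assumption for illustration.

```python
# How long does it take to push 300 GB into EC2 at a given uplink speed?
# The bandwidth value is an assumption for illustration only.

data_gb = 300
uplink_mbit_s = 200          # assumed sustained upload bandwidth

data_megabits = data_gb * 8 * 1000   # GB -> megabits (decimal units)
hours = data_megabits / uplink_mbit_s / 3600

print(f"~{hours:.1f} hours to transfer {data_gb} GB at {uplink_mbit_s} Mbit/s")
print("Inbound transfer itself was not charged; only instance usage was.")
```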

Making the transition: OS/DB Migration

While the paper-based exercise for costing and budgeting was being completed, we needed to get our data off our POWER8 server. As this was a transfer from a big-endian Linux on Power system to a little-endian x86-based Linux system, we were forced to go down the OS/DB migration route.

To make the transition to the Cloud effortless, we decided to use another set of loaned infrastructure to complete the migration on-premise first. This would ensure the exports were valid and usable before we embarked on the activities necessary to migrate the instances to the Cloud. It would also help us separate the two important streams of re-platforming and migrating to the Cloud. We used our BW instances for proofs of concept and demos.

Here are some of the challenges we encountered along the way.

I mention here some “gotchas” you might want to bear in mind or investigate when taking on a heterogeneous system migration. ALWAYS refer to the heterogeneous system copy guide for your system:

Issue #1: SMIGR_CREATE_DDL ended with a warning.
When executing SMIGR_CREATE_DDL, which generates the .sql files containing DDL statements for non-standard ABAP database objects (mainly BW objects), we got an “internal error in CreateConcatfile”.

[Images: the SMIGR_CREATE_DDL report used to generate DDL statements for the migration]

The note below implies that this internal error can be ignored, but not if it is a large (XXL) BW migration.

2262774 - Internal Error in CREATECONCATFILE

Cause: This is an internal error that is only relevant if it is an XXL migration.

Resolution: This error can be ignored as per SAP Note 2087001. As of SAP NW BW 7.50 this is displayed explicitly in the output window.

Keywords: Error when running the report SMIGR_CREATE_DDL

Issue #2: HDB_TABLE_CLASSIFICATION.TXT did not exist. SMIGR_CREATE_DDL did not create the HDB_TABLE_CLASSIFICATION.TXT file.

[Image: checking the report execution result]

We resolved this by creating HDB_TABLE_CLASSIFICATION.TXT manually, using the content supplied in the relevant SAP Note.

Note #3: Use HDB_ESTIMATES to pre-empt which tables need splitting

[Images: using HDB estimates to pre-empt table splitting; NetWeaver support release, general SAP system parameters and table splitting screens]

Extract and use the SPLIT.SAR file and generate the table splitting input file as described in the system copy guide.
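If you have many large tables, it can help to script the table splitting input file rather than typing it by hand. The sketch below assumes the common <table>%<number of splits> line format and uses one hypothetical table name alongside our DSO table; always follow the exact format given in your system copy guide.

```python
# Hypothetical sketch: generate a table splitting input file for the
# export. The <table>%<splits> line format and the entries themselves
# are assumptions for illustration; follow your system copy guide.

large_tables = {
    "/BIC/AZEPMDS0200": 10,   # large DSO active table, 10 packages
    "/BIC/FZEPMCUBE1": 5,     # hypothetical fact table, 5 packages
}

with open("table_split_input.txt", "w") as f:
    for table, splits in large_tables.items():
        f.write(f"{table}%{splits}\n")

print("Wrote table_split_input.txt with", len(large_tables), "entries")
```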

Issue #4: DSO tables with more than 2 million rows presented a real problem during export.

After analysing the logfiles we found the following error:
trying to determine selectiveness of key column REQUEST
SELECT COUNT (DISTINCT REQUEST) FROM /BIC/AZEPMDS0200

We got around this problem by generating an appropriate R3ta_hints.txt file as follows, though not without a little trial and error.

[Images: the R3ta_hints.txt file and the tail of R3ta__BIC_AZEPMDS0200.log]
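Scripting the hints file also saves some of that trial and error. The sketch below is only an illustration: the <table> <column> line format and the choice of REQUEST as the hint column are assumptions on our part, so verify both against the SAP notes and documentation for R3ta before relying on it.

```python
# Hypothetical sketch: write an R3ta_hints.txt file so R3ta does not have
# to work out the selectiveness of an unsuitable key column itself.
# The "<table> <column>" line format and the hint column chosen here are
# assumptions for illustration; verify against the SAP notes for R3ta.

hints = {
    "/BIC/AZEPMDS0200": "REQUEST",   # column R3ta should split on (assumed)
}

with open("R3ta_hints.txt", "w") as f:
    for table, column in hints.items():
        f.write(f"{table} {column}\n")

print("Wrote R3ta_hints.txt with", len(hints), "hint(s)")
```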

Issue #5: Table definition problems while importing.

Like with everything else, there were a few discoveries and challenges en-route:

  • Column tables got exported as row tables, especially our massive DSO tables.
  • This resulted in massive memory utilisation and subsequent crashes.
  • HANA went on a massive recovery spree, after which we unloaded the table and truncated it. We had to restart the import and got around it this time by changing the table definitions in the .SQL file (see the sketch after this list).
  • Eventually, the OS/DB migration was successful and the BW instance was up and available.
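That last workaround, editing the generated .SQL files, lends itself to a small script. The sketch below is hedged accordingly: it assumes the offending statements literally read CREATE ROW TABLE, and the ABAP/DB/HDB path is only an assumed export layout, so inspect your own export first and keep backups of anything you change.

```python
# Hypothetical sketch: rewrite row-table DDL to column-table DDL in the
# .SQL files produced for the import. It assumes the offending statements
# literally read "CREATE ROW TABLE" and that the files live under an
# assumed ABAP/DB/HDB export directory; inspect your own export first
# and keep a backup of every file you touch.
import glob
import shutil

for path in glob.glob("ABAP/DB/HDB/*.SQL"):   # assumed export layout
    with open(path, "r", encoding="utf-8") as f:
        ddl = f.read()

    if "CREATE ROW TABLE" not in ddl:
        continue

    shutil.copy(path, path + ".bak")          # keep the original
    with open(path, "w", encoding="utf-8") as f:
        f.write(ddl.replace("CREATE ROW TABLE", "CREATE COLUMN TABLE"))
    print(f"Rewrote row-table DDL in {path}")
```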

As you can see, the move to the Cloud is not as straightforward as it sounds. Keep your eyes peeled for the next blog in the series, where I will discuss how to prepare and operate the AWS platform: compute resources, IP addresses, instances, EBS, storage, CPU and backup options.