
Migration Strategy — Planning & Execution

Quick Reference: Migration at a Glance

| Question | Answer |
| --- | --- |
| Smallest downtime? | DMS Online, MI Link, Log Replay Service |
| Simplest method? | BACPAC (offline, complete copy) |
| Oracle → Azure? | SSMA assessment → DMS migration |
| Largest databases? | MI Link (streaming), Log Replay (backup chain) |
| Azure SQL DB target? | BACPAC, DMS, Transactional Replication |
| MI target? | MI Link, LRS, DMS, Backup to URL |
| VM target? | Always. Assessment first — use DMA (SQL → Azure) or SSMA (Oracle → Azure) |

Migration Decision Framework

1. 🔍 Assess — Azure Migrate discovery, DMA compatibility check, SSMA for Oracle/MySQL
2. 🎯 Choose Target — SQL Database (PaaS, no admin), Managed Instance (near 100% compat), SQL VM (full control)
3. ⚖️ Online vs Offline — offline: simple, hours of downtime; online: complex, minutes of downtime
4. 🚀 Execute & Validate — schema + data migration, post-migration validation, performance baseline

Online vs Offline Migration

| Aspect | Offline | Online |
| --- | --- | --- |
| Downtime | Hours to days | Minutes |
| Complexity | Low | Medium-High |
| Data loss risk | None (full copy) | Near-zero (continuous sync) |
| Best for | Small DBs, dev/test, planned maintenance windows | Production, large DBs, 24/7 availability |
| Tools | BACPAC, backup/restore, BCP, DMA | DMS, MI Link, Transactional Replication, Log Replay |
🎯 Exam Focus

The exam frequently asks: "Which migration method provides minimal downtime?" → the answer is always an online method (DMS online mode, MI Link, Log Replay Service, Transactional Replication). When "acceptable downtime" is stated, offline methods are the simpler choice.

Migration Tools — Complete Reference

Assessment Tools

| Tool | What It Does | Source → Target |
| --- | --- | --- |
| Azure Migrate | Discovers and assesses on-prem databases | Any → Azure |
| Data Migration Assistant (DMA) | Compatibility check + feature parity | SQL Server → Azure SQL |
| Azure SQL Migration extension (Azure Data Studio) | Assessment + SKU recommendation + migration | SQL Server → Azure SQL DB/MI/VM |
| SSMA (SQL Server Migration Assistant) | Schema + data conversion | Oracle/MySQL/Access → SQL Server/Azure SQL |
🏢 Real-World DBA Note

Oracle DBA parallel: SSMA is your primary tool for Oracle → Azure SQL. It converts Oracle schemas, PL/SQL to T-SQL, and data types. Run the assessment first — it flags conversion issues (autonomous transactions, varrays, Oracle-specific types). Plan for manual fixes on ~10-20% of complex PL/SQL.

Migration Methods by Target

  • 🗄️ Azure SQL Database: BACPAC, DMS (online/offline), Transactional Replication, BCP
  • 🏢 SQL Managed Instance: MI Link, Log Replay Service (LRS), DMS, Backup/Restore to URL
  • 🖥️ SQL Server on VM: Backup/Restore, Detach/Attach, Log Shipping, Always On AG seeding

Key Migration Methods — Architecture

BACPAC (Azure SQL Database)

BACPAC Workflow

1. 📤 Export — source must be quiesced; schema + data exported to a .bacpac file
2. ☁️ Upload — upload the .bacpac to Azure Blob Storage, or export directly via SqlPackage
3. 📥 Import — import via the Portal, SqlPackage, or PowerShell; creates a new database on the target server
4. ✅ Validate — recreate logins (not preserved); verify schema, data, and permissions

Key facts:

  • Contains both schema AND data (unlike DACPAC, which is schema-only)
  • Offline only — source must be quiesced during export
  • Max import size: depends on DTU/vCore tier
  • Does NOT preserve SQL logins — must recreate on target
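
Since logins do not survive the export, script them from the source ahead of time and recreate them after the import. A minimal sketch, assuming a placeholder login/user pair (app_login / app_user — script your real logins from the source first, e.g., with Microsoft's sp_help_revlogin support script):

```sql
-- On the target logical server, in the master database:
CREATE LOGIN app_login WITH PASSWORD = 'Use-A-Strong-Generated-Password-1!';

-- In the imported database: re-link the now-orphaned user to the new login.
ALTER USER app_user WITH LOGIN = app_login;
```
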

Database Migration Service (DMS)

DMS Online Migration

1. 📊 Initial full load — full snapshot of the source database, bulk-copied to the target Azure SQL
2. 🔄 Continuous CDC sync — Change Data Capture tracks ongoing changes and applies them incrementally to the target
3. ✂️ Cutover — stop application writes to the source, apply the final changes, switch connection strings

Supports both online and offline modes:

  • Offline: Full copy, then switch
  • Online: Initial load + continuous Change Data Capture → cutover when ready
  • Requires Azure DMS instance in the same VNet as target (for MI)

MI Link Replication

1. 🏢 On-prem primary — SQL Server 2016+ remains the primary, using distributed availability group technology
2. 🔄 Near real-time sync — continuous log replication to MI; the MI secondary is readable
3. ✂️ Break link & cutover — MI becomes the new primary; no-downtime cutover

Why MI Link is powerful:

  • Near real-time replication from on-prem to MI
  • Secondary (MI) is readable during migration — offload reporting
  • No downtime cutover — when ready, break the link and MI becomes primary
  • Uses Distributed Availability Group technology
🎯 Exam Focus

MI Link is the recommended migration path for SQL Server → Managed Instance when you need minimal downtime. It supports SQL Server 2016+. For SQL Server 2012/2014, use Log Replay Service or DMS instead.
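
Under the hood, the link is a distributed availability group. As a rough sketch only — in practice the link is created through the SSMS wizard or PowerShell, and endpoints and certificates must already be configured — the on-prem side resembles this (all names and URLs are placeholders):

```sql
CREATE AVAILABILITY GROUP [DAG_MiLink]
    WITH (DISTRIBUTED)
    AVAILABILITY GROUP ON
        N'AG_OnPrem' WITH (    -- local AG on the SQL Server 2016+ instance
            LISTENER_URL = 'tcp://onprem-sql.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC
        ),
        N'AG_MI' WITH (        -- AG representing the Managed Instance side
            LISTENER_URL = 'tcp://mi-placeholder.database.windows.net:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC
        );
```
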

Log Replay Service (LRS) — MI Only

1. 💾 Backup chain — full + differential + transaction log backups from on-prem SQL Server 2012+
2. ☁️ Upload to Blob — upload the .bak and .trn files to Azure Blob Storage; MI reads from the Blob container
3. 🔄 LRS replays — MI applies the backup files in order; the database stays in the Restoring state
4. ✅ Complete restore — autocomplete when the last log is applied; the database comes online on MI

Key facts:

  • Upload backup files to Azure Blob → LRS replays them on MI
  • MI stays in Restoring state until you complete the restore
  • Supports autocomplete mode (finishes when last log is applied)
  • Good for SQL Server 2012+ when MI Link isn't available

Table Partitioning

Splits a large table into smaller physical segments based on a column value (usually date).

Table Partitioning Components

  • 📅 Partition Function — defines boundary values (e.g., date ranges) that split data into partitions
  • 🗂️ Partition Scheme — maps each partition to a filegroup for physical storage placement
  • 📊 Partitioned Table — created on the scheme; queries benefit from partition elimination, scanning only the relevant partitions

When to partition:

  • Tables with 100+ million rows
  • Queries consistently filter by the partition column (date ranges)
  • Archival requirements (switch out old partitions)
  • Available on: SQL DB (vCore), MI, SQL VM (not DTU Basic/Standard)

Architecture components:

  1. Partition Function — defines boundary values (e.g., date ranges)
  2. Partition Scheme — maps partitions to filegroups
  3. Partitioned Table — created on the partition scheme
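
The three components above map to three DDL statements. A minimal sketch with illustrative names, using yearly boundaries on a date column and a single filegroup for brevity:

```sql
-- 1) Partition function: boundary values (RANGE RIGHT places each boundary
--    date in the partition to its right).
CREATE PARTITION FUNCTION pf_OrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01', '2025-01-01');

-- 2) Partition scheme: maps partitions to filegroups. ALL TO ([PRIMARY])
--    keeps the sketch simple; on-prem production often spreads filegroups.
CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

-- 3) Partitioned table: created on the scheme, keyed by the partition column.
CREATE TABLE dbo.Orders (
    OrderId   bigint        NOT NULL,
    OrderDate date          NOT NULL,
    Amount    decimal(19,4) NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OrderDate)
) ON ps_OrderDate (OrderDate);
```
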
🎯 Exam Focus

Partition elimination is the key performance benefit — SQL Server only scans partitions that match the query filter. If the exam asks "how to improve query performance on a 500M row table filtered by date" → table partitioning.

Data Compression

Reduces storage size and improves I/O performance (fewer pages to read).

| Type | Space Savings | Performance Impact | Best For |
| --- | --- | --- | --- |
| Row | ~15-25% | Minimal CPU overhead | OLTP tables with many NULLs/short values |
| Page | ~60-75% | Moderate CPU overhead | Read-heavy/archive tables |
| Columnstore | ~90%+ | Write overhead; very fast scans | Analytics/data warehouse |

Architecture insight: Compression works at the page level. When SQL Server reads a compressed page, it decompresses in memory. The tradeoff: less I/O (huge win) vs more CPU (usually acceptable).
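
To check that tradeoff before committing, estimate first and then rebuild. The table name below is illustrative:

```sql
-- Estimate savings for page compression (system stored procedure).
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'Orders',
    @index_id         = NULL,     -- NULL = all indexes
    @partition_number = NULL,     -- NULL = all partitions
    @data_compression = N'PAGE';

-- If the estimate looks good, apply page compression via a rebuild.
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
```
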

🎯 Exam Focus

Key exam fact: Use sp_estimate_data_compression_savings to estimate savings before applying. Page compression gives the best ratio for most workloads. Columnstore compression is for analytics tables only.

SQL Data Sync

Bi-directional sync between Azure SQL databases OR between Azure SQL and on-prem SQL Server.

SQL Data Sync — Hub-Member Topology

  • 🏛️ Hub Database (Azure SQL) — the central database that coordinates sync; always an Azure SQL Database
  • 🔗 Member Databases — Azure SQL DBs or on-prem SQL Servers (via a sync agent); bi-directional sync

Use cases: distributing data across databases and hybrid sync. NOT for migration — it is for ongoing synchronization.

Post-Migration Validation Checklist

| Check | What to Verify |
| --- | --- |
| Schema | All objects migrated (tables, views, SPs, functions, triggers) |
| Data | Row counts match, checksums validate, sample data spot-check |
| Performance | Query plans are good, no regressions, update statistics |
| Security | Logins recreated, permissions correct, TDE enabled |
| Connectivity | App connection strings updated, firewall/PE configured |
| Compatibility | ALTER DATABASE SET COMPATIBILITY_LEVEL matches source if needed |
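
For the data and compatibility checks, queries along these lines (run on both source and target, then compare) can help; they rely only on catalog views:

```sql
-- Row counts per table from partition metadata (no full table scans).
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.schemas s    ON s.schema_id = t.schema_id
JOIN sys.partitions p ON p.object_id = t.object_id
                      AND p.index_id IN (0, 1)  -- heap or clustered index only
GROUP BY s.name, t.name
ORDER BY s.name, t.name;

-- Check the compatibility level; pin it to the source value if needed.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
-- ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;
```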

Tool Selection: "When NOT to Use This Tool"

| Tool | Do NOT use when... |
| --- | --- |
| BACPAC | DB > 200 GB (slow export), need minimal downtime, target is MI (use backup/restore instead) |
| DMS Offline | Zero downtime required — use DMS Online mode instead |
| DMS Online | Source is Oracle — use SSMA + DMS, not DMS alone |
| MI Link | Source is SQL 2012/2014 — requires SQL 2016+. Use LRS instead. |
| Log Replay Service | Need read access during migration — MI stays in Restoring mode until cutover |
| SSMA | Source is SQL Server (not Oracle/MySQL) — use DMA instead |
| Backup/Restore to URL | Target is Azure SQL DB (doesn't support native restore) — use BACPAC or DMS |

Migration Anti-Patterns

| Anti-Pattern | Why It Fails | What to Do Instead |
| --- | --- | --- |
| Skip assessment | Blocking issues surface during the production cutover | Always run a DMA/SSMA assessment first |
| Test in dev, deploy in prod | Performance differs at scale | Test with production-size data and workload |
| Migrate schema and data separately with manual scripts | Error-prone, inconsistent | Use DMS or BACPAC for atomic migration |
| Ignore compatibility level | New optimizer behavior causes plan regressions | Keep the source compat level, upgrade gradually with Query Store |
| Migrate on Friday evening | Weekend outage if problems arise | Migrate early in the week with a rollback window |
| No rollback plan | Stuck if the migration fails | Keep the source running until validation is complete |
| Single massive BACPAC | Timeout on large databases | Split into smaller databases or use online migration |

Common Post-Migration Mistakes

| Mistake | Impact | Fix |
| --- | --- | --- |
| Forgetting to update statistics | Query optimizer uses stale stats → bad plans | Run sp_updatestats or a full statistics update after migration |
| Not rebuilding indexes | Indexes fragmented by bulk inserts during migration | Rebuild indexes on all tables post-migration |
| Leaving the old compatibility level forever | Misses IQP improvements (batch mode, adaptive joins) | Upgrade compat gradually with Query Store monitoring |
| Not testing under load | Works in test, fails at production concurrency | Load test with realistic concurrent users before cutover |
| Ignoring login/user mapping | CREATE USER on target leaves orphaned SIDs | Use ALTER USER ... WITH LOGIN to fix orphaned users |
| No performance baseline | Can't tell if post-migration performance is normal | Capture a Query Store baseline before and after migration |
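
Two of the fixes above — refreshing statistics and repairing orphaned users — can be scripted directly. A sketch with placeholder principal names:

```sql
-- Refresh statistics database-wide after the migration.
EXEC sp_updatestats;

-- Find users whose SID no longer matches any server login (orphaned users).
-- Applies to SQL Server / Managed Instance.
SELECT dp.name AS orphaned_user
FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.type = 'S'                  -- SQL users
  AND dp.authentication_type = 1    -- login-mapped (instance) authentication
  AND sp.sid IS NULL;

-- Re-map an orphaned user to its recreated login (placeholder names).
ALTER USER app_user WITH LOGIN = app_login;
```
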
🎯 Exam Focus

Common exam trap: "After migrating to Azure SQL DB, queries are slower than on-prem." The answer almost always involves: 1) Update statistics, 2) Check compatibility level, 3) Review Query Store for plan regressions, 4) Verify correct service tier (not under-provisioned).

Oracle → Azure Migration Mindset

If you're coming from Oracle, these mental shifts help during migration planning:

| Oracle Thinking | Azure SQL Reality |
| --- | --- |
| "I'll export with Data Pump" | Use SSMA for schema conversion, DMS for data migration |
| "I need RAC for HA" | Business Critical tier gives you a 4-node AG automatically |
| "Data Guard handles DR" | Failover Groups with auto-failover and a single endpoint |
| "I'll tune the instance (SGA, PGA)" | You tune the workload (queries, indexes, tier selection) |
| "I need to manage backups" | Backups are automatic (PaaS). You just set retention. |
| "PL/SQL stored procedures" | Convert to T-SQL via SSMA. Plan for 10-20% manual fixes. |
| "Oracle Wallet for encryption" | TDE is ON by default. Key Vault for BYOK. |
| "I'll set up the OS and install SQL" | Choose PaaS (SQL DB/MI) unless you need full OS access |

Flashcards

Online vs Offline migration — key difference?
Offline = downtime required (BACPAC, backup/restore). Online = near-zero downtime (DMS online, MI Link, Log Replay Service, Transactional Replication).

Quiz

A company needs to migrate a 500 GB SQL Server database to Azure SQL MI with minimal downtime. Which method is recommended?