Migrating a MySQL database is a high-impact operation: it affects data integrity, application uptime, and rollback safety. This guide shows a proven migration workflow using mysqldump (most universal) and explains how to reduce downtime using a “final sync” or replication approach.
MySQL is most commonly hosted on a Linux VPS, but migrations can involve applications on a Windows VPS too. Either way, stable network and disk performance from your VPS hosting provider makes migrations faster and safer.
Before migrating, take inventory of settings that commonly differ between servers: sql_mode, lower_case_table_names, time zone, and InnoDB settings. Quick inventory commands (run on the source server):
# MySQL version
mysql -V
# Server variables that often matter during migration
mysql -u root -p -e "SHOW VARIABLES LIKE 'version%'; SHOW VARIABLES LIKE 'sql_mode'; SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'collation_server'; SHOW VARIABLES LIKE 'default_authentication_plugin';"
# Database sizes
mysql -u root -p -e "SELECT table_schema AS db, ROUND(SUM(data_length+index_length)/1024/1024,2) AS size_mb FROM information_schema.tables GROUP BY table_schema ORDER BY size_mb DESC;"
mysqldump (logical dump): best for small and medium databases, and the safest option across versions. You export SQL (schema + data) and import it on the new server. It is slower than physical methods for very large datasets, but it is predictable and easy to roll back.
Replication-based cutover: you set up the new server as a replica, let it catch up, then switch application traffic. This can reduce downtime to minutes, but requires more steps and careful validation.
Physical/hot backups (e.g., Percona XtraBackup): fastest for large InnoDB datasets, but more sensitive to version and configuration. Use them only if you already operate such tooling confidently.
For InnoDB, use --single-transaction to avoid locking tables. Also include routines, triggers, and events so application logic isn’t lost.
# Example: dump ONE database with compression
# Use -p to enter password interactively (safer than putting it in the command line)
mysqldump -u root -p \
--single-transaction --quick \
--routines --triggers --events \
--hex-blob \
--default-character-set=utf8mb4 \
olddb | gzip > olddb-$(date +%F).sql.gz
If you have MyISAM tables: --single-transaction does not protect MyISAM consistency, because MyISAM is not transactional. Schedule downtime or use a read-lock strategy during the dump (for example, mysqldump's --lock-all-tables).
Optional: dump multiple databases (explicit list):
mysqldump -u root -p \
--single-transaction --quick \
--routines --triggers --events \
--databases db1 db2 db3 | gzip > dbs-$(date +%F).sql.gz
Use SCP or rsync over SSH. Verify integrity with a checksum.
# Create checksum on source
sha256sum olddb-2025-08-01.sql.gz > olddb-2025-08-01.sql.gz.sha256
# Transfer to destination
scp olddb-2025-08-01.sql.gz olddb-2025-08-01.sql.gz.sha256 user@NEW_SERVER_IP:/tmp/
On destination, verify:
cd /tmp
sha256sum -c olddb-2025-08-01.sql.gz.sha256
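The checksum steps above can be wrapped in two small shell functions so the same procedure is repeatable for every dump file (the function names here are ours, not standard tooling):

```shell
# Sketch: reusable wrappers around the sha256sum steps above.
# make_checksum runs on the source, verify_checksum on the destination.
make_checksum() {
  # Writes <file>.sha256 next to the file.
  sha256sum "$1" > "$1.sha256"
}

verify_checksum() {
  # Exits non-zero if the file was corrupted in transit.
  sha256sum -c "$1.sha256"
}

# Usage:
#   make_checksum olddb-2025-08-01.sql.gz      # on the source
#   verify_checksum olddb-2025-08-01.sql.gz    # on the destination, after scp
```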
Match the character set and collation to what your application expects. If you are unsure, check the source database defaults first.
# Check source defaults
mysql -u root -p -e "SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME FROM information_schema.SCHEMATA WHERE SCHEMA_NAME='olddb';"
Create on destination:
mysql -u root -p -e "CREATE DATABASE olddb CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
# Import compressed dump
gunzip -c /tmp/olddb-2025-08-01.sql.gz | mysql -u root -p olddb
Tip for large imports: run inside a screen/tmux session and make sure the destination has enough disk space for logs and temporary files.
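A rough pre-flight disk check helps with that. This sketch assumes gzip-compressed SQL expands roughly 4x on import (the real ratio depends on your data), and the function name is ours:

```shell
# Sketch: estimate whether the destination datadir has room for the import.
# The 4x expansion factor is a rule of thumb, not a guarantee.
check_space() {
  dump="$1"       # path to the .sql.gz file
  datadir="$2"    # e.g. /var/lib/mysql
  need_kb=$(( $(du -k "$dump" | cut -f1) * 4 ))
  avail_kb=$(df -k "$datadir" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -gt "$need_kb" ]; then
    echo "OK: ${avail_kb} KB free, ~${need_kb} KB estimated"
  else
    echo "WARNING: only ${avail_kb} KB free, ~${need_kb} KB estimated"
  fi
}

# Usage: check_space /tmp/olddb-2025-08-01.sql.gz /var/lib/mysql
```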
Database objects (tables) are not enough — your application user accounts and grants must be recreated.
On the source server, list non-system users:
mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user NOT IN ('mysql.sys','mysql.session','mysql.infoschema') ORDER BY user, host;"
For each application user (example: appuser), export definitions:
mysql -u root -p -e "SHOW CREATE USER 'appuser'@'%';"
mysql -u root -p -e "SHOW GRANTS FOR 'appuser'@'%';"
Then recreate them on the destination. Example (adjust plugin/password to your environment):
# Example: create user and grant permissions
mysql -u root -p -e "CREATE USER 'appuser'@'%' IDENTIFIED BY 'STRONG_PASSWORD_HERE';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON olddb.* TO 'appuser'@'%'; FLUSH PRIVILEGES;"
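Listing accounts one by one gets tedious when there are many. A sketch that generates the SHOW GRANTS statements for every non-system account at once (the function name is ours; credentials are passed the same way as in the examples above):

```shell
# Sketch: emit one "SHOW GRANTS FOR ..." statement per non-system account.
# Pipe the generated statements back into mysql to capture all grants in one file.
export_grants() {
  mysql -u root -p -N -e "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user WHERE user NOT LIKE 'mysql.%';"
}

# Usage (run on the source server):
#   export_grants | mysql -u root -p -N > grants.sql
# Note: -p prompts twice in that pipeline; a ~/.my.cnf credentials file avoids it.
```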
Compatibility note: MySQL 8 defaults to caching_sha2_password. Some older clients require mysql_native_password. If your application is legacy, plan this before cutover (but use modern auth when possible).
At minimum, validate that schema exists, row counts look reasonable, and key queries work.
# Top 20 tables by estimated row count (table_rows is approximate for InnoDB)
mysql -u root -p -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema='olddb' ORDER BY table_rows DESC LIMIT 20;"
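To compare both servers mechanically rather than by eye, you can export the counts to a file on each side and diff the results (function names are ours; remember table_rows is only an estimate for InnoDB, so small differences may be noise):

```shell
# Sketch: export per-table row counts, then diff the two exports.
dump_counts() {
  # $1 = schema name, $2 = output file; run once on each server
  mysql -u root -p -N -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema='$1' ORDER BY table_name;" > "$2"
}

compare_counts() {
  # Prints differing tables and exits non-zero if the exports diverge.
  diff "$1" "$2"
}

# Usage:
#   dump_counts olddb counts-source.txt   # on the source
#   dump_counts olddb counts-dest.txt     # on the destination (copy one file over)
#   compare_counts counts-source.txt counts-dest.txt
```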
For deeper validation (recommended for critical systems), compare per-table checksums: run CHECKSUM TABLE on both servers, or use a dedicated tool such as pt-table-checksum from Percona Toolkit.
If you are not using replication, the most common "safe" approach is a short maintenance window: stop application writes (or set the source read-only), take a final dump, import it on the new server, point the application at the new host, validate, and only then re-enable traffic.
A secure MySQL database migration is all about consistency (correct dump options), security (safe transfer + restricted access), and validation (checks before cutover). Most teams run MySQL on a Linux VPS, but migrations often involve applications on a Windows VPS. With stable VPS hosting resources and a clean plan, migrations become predictable and repeatable.