
Automate MySQL Database Backup to Amazon AWS S3 (using PHP)

Published On: 18 August 2015.
  • Product & Platform Engineering

Databases.

A database is an organized collection of data. It is the collection of schemas, tables, queries, reports, views and other objects. The data is typically organized to model aspects of reality in a way that supports processes requiring information, such as modelling the availability of rooms in hotels in a way that supports finding a hotel with vacancies. – Wikipedia

Databases are frequently updated, and their dynamic nature makes regular manual backups impractical. Sure, you can create a mysqldump on the server itself with a simple script, but what if the server crashes?

To deal with this issue, you can regularly create a backup of the database and upload it to a remote (and reliable) server using a script. We’ll use AWS S3 for this tutorial, as it’s reliable and easy to set up. Let’s start with the backup function first.

function LetsTakeBackup() {
    
    // prefix the filename with the current timestamp; it is used later to expire old backups
    $filename = time() . '_' . 'db-backup.sql.bz2';
    $backup_directory = 'path/to/backup/directory';
    
    // Note: no space between -p and the password, otherwise mysqldump prompts for a
    // password interactively and treats the next argument as the database name.
    $cmd = 'mysqldump -h ' . escapeshellarg( DB_HOSTNAME ) . ' -u ' . escapeshellarg( DB_USERNAME ) . ' -p' . escapeshellarg( DB_PASSWORD ) . ' ' . escapeshellarg( DATABASE_NAME ) . ' | bzip2 > ' . $backup_directory . $filename;

    exec( $cmd );
    
}

There. The database backup file is generated and saved in the specified directory on the server itself. Note that the filename is prefixed with the current timestamp using PHP’s time() function; we’ll use this timestamp to delete backup files older than a certain period. Now we just need to move the file to AWS S3.
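The timestamp prefix makes the expiry check a simple integer comparison against strtotime(). Here is a minimal sketch of the idea (the helper name and the `-1 week` limit are illustrative, not part of the script above):

```php
<?php
// Hypothetical helper: decide whether a backup filename such as
// "1439900000_db-backup.sql.bz2" is older than a given limit.
function is_backup_expired( $filename, $limit = '-1 week' ) {
    // everything before the first underscore is the time() prefix
    $timestamp = (int) substr( $filename, 0, strpos( $filename, '_' ) );
    // strtotime('-1 week') returns the cutoff as a Unix timestamp
    return $timestamp < strtotime( $limit );
}

var_dump( is_backup_expired( time() . '_db-backup.sql.bz2' ) );  // bool(false)
var_dump( is_backup_expired( '1439900000_db-backup.sql.bz2' ) ); // bool(true)
```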

Get AWS SDK

Download the AWS SDK for PHP from Amazon and include it in your project: http://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/installation.html (note that the class below uses the `Aws\Common\Aws` factory from version 2 of the SDK).

The PHP Class

Here’s a small piece of code, and that’s all you need to get the script up and running. Make modifications as per your requirements.

require_once( '/path/to/aws-autoloader.php' );
use Aws\Common\Aws;

// AWS key
define( 'AWS_KEY', 'YOUR AWS ACCOUNT KEY' );

// AWS secret
define( 'AWS_SECRET', 'YOUR AWS ACCOUNT SECRET KEY' );

// AWS bucket to use (bucket is created if it doesn't exist)
define( 'BUCKET', 'BUCKET NAME' );

// backup limit
define( 'BACKUP_LIMIT', 'BACKUP TIME LIMIT' ); // eg. -1 day, -2 weeks, -1 month

// file size/upload limit in bytes
define( 'FILE_SIZE_LIMIT', 'MAX FILE SIZE IN BYTES' );

class UploadToAWS
{
	function movefile( $fileName, $filePath ) {
		
		// directory path of backup files
		define( 'DB_DIRECTORY', $filePath . '/' );

		// source file to upload
		define( 'SOURCE_FILE', $fileName );

		$this->check_file_exists( SOURCE_FILE );

		$aws = Aws::factory( array( 'credentials' => array( 'key' => AWS_KEY, 'secret' => AWS_SECRET ) ) );
		$s3 = $aws->get('S3');

		$this->check_bucket_exists( $s3 );

		$this->upload_file( $s3, SOURCE_FILE );

		$files = $this->get_files_in_bucket( $s3 );

		if ( !empty( $files ) ) {
			$this->delete_files( $s3, $files );
		}

	}

	// check if db backup file exists
	private function check_file_exists( $file = null ) {
		if ( empty( $file ) || !file_exists( DB_DIRECTORY . $file ) ) {
			die( 'ERROR: File not found' );
		}
		if ( is_dir( DB_DIRECTORY . $file ) ) {
			die( 'ERROR: Please select a valid file.' );
		}
		$filesize = filesize( DB_DIRECTORY . $file );
		if ( $filesize > FILE_SIZE_LIMIT ) {
			die( 'ERROR: File too large.' );
		}
	}

	// check if bucket exists, if not then create one
	private function check_bucket_exists( &$s3 ) {
		$bucketexists = false;

		$buckets = $s3->listBuckets();

		foreach ( $buckets['Buckets'] as $bucket ) {
			if ( $bucket['Name'] == BUCKET ) {
				$bucketexists = true;
			}
		}
		if ( $bucketexists == false ) {
			$s3->createBucket( array( 'Bucket' => BUCKET ) );
			$s3->waitUntil( 'BucketExists', array( 'Bucket' => BUCKET ) );
		}
	}

	// upload file in parts
	private function upload_file( &$s3, $source_file ) {
		$files = $this->get_files_in_bucket( $s3 );
		if ( !empty( $files ) ) {
			foreach ( $files as $file ) {
				if ( $file == $source_file ) {
					// file with same name already exists
					return false;
				}
			}
		}
		$filename = $source_file;

		// Create a new multipart upload and get the upload ID.
		$fileupload = $s3->createMultipartUpload( array(
			'Bucket'       => BUCKET,
			'Key'          => $filename,
			'StorageClass' => 'REDUCED_REDUNDANCY',
			)
		);
		$uploadId = $fileupload['UploadId'];

		// Upload the file in 5 MB parts.
		try {
			$file = fopen( DB_DIRECTORY . $source_file, 'r' );
			$parts = array();
			$partNumber = 1;

			while ( !feof( $file ) ) {
				$fileupload = $s3->uploadPart( array(
					'Bucket'     => BUCKET,
					'Key'        => $filename,
					'UploadId'   => $uploadId,
					'PartNumber' => $partNumber,
					'Body'       => fread( $file, 5 * 1024 * 1024 )
					)
				);
				$parts[] = array(
					'PartNumber' => $partNumber++,
					'ETag'       => $fileupload['ETag']
				);
			}
			fclose( $file );
		} catch ( \Aws\S3\Exception\S3Exception $e ) {
			// abort the upload so orphaned parts don't accumulate (and get billed)
			$s3->abortMultipartUpload( array(
				'Bucket'   => BUCKET,
				'Key'      => $filename,
				'UploadId' => $uploadId
				)
			);
			return false;
		}

		// Complete multipart upload.
		$fileupload = $s3->completeMultipartUpload( array(
			'Bucket'   => BUCKET,
			'Key'      => $filename,
			'UploadId' => $uploadId,
			'Parts'    => $parts
			)
		);
	}

	private function get_files_in_bucket( &$s3 ) {
		$files = array();
		$iterator = $s3->getIterator( 'ListObjects', array( 'Bucket' => BUCKET ) );
		foreach ( $iterator as $object ) {
			$files[] = $object['Key'];
		}
		return $files;
	}

	// delete backups older than BACKUP_LIMIT, always keeping at least two
	private function delete_files( &$s3, $files ) {
		foreach ( $files as $key => $file ) {
			// extract the timestamp prefix from the filename
			$dbtime = (int) substr( $file, 0, strrpos( $file, '_' ) );
			if ( $dbtime < strtotime( BACKUP_LIMIT, time() ) ) {
				unset( $files[$key] );
				if ( sizeof( $files ) < 2 ) {
					// fewer than 2 backup files would remain, deletion stopped
					return false;
				}
				$s3->deleteObject( array(
					'Bucket' => BUCKET,
					'Key'    => $file
					)
				);
			}
		}
	}
}

This class:

  • Checks that the file you’re trying to upload is valid and not too large.
  • Checks if the AWS bucket exists, and creates one if not. Buckets on AWS S3 are like directories.
  • Lists all files in the bucket and uses the timestamp prefix to delete files older than the specified limit. This is important to keep the total size of backups on S3 under control.
  • And of course, uploads the file to AWS S3 using a multipart upload.

You can also create a function to delete the local backup file after the upload is completed successfully.
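A related local cleanup might look like the following minimal sketch (the function name and the `-1 week` default are assumptions; it mirrors the timestamp-prefix convention used above):

```php
<?php
// Hypothetical helper: remove local backup files older than a limit,
// matching the "<timestamp>_db-backup.sql.bz2" naming used by the backup function.
function delete_old_local_backups( $directory, $limit = '-1 week' ) {
    $cutoff = strtotime( $limit );
    foreach ( glob( rtrim( $directory, '/' ) . '/*_db-backup.sql.bz2' ) as $path ) {
        // everything before the first underscore is the time() prefix
        $timestamp = (int) substr( basename( $path ), 0, strpos( basename( $path ), '_' ) );
        if ( $timestamp < $cutoff ) {
            unlink( $path );
        }
    }
}
```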

Instantiate UploadToAWS

$AWS = new UploadToAWS();
$AWS->movefile( $fileName, $filePath );

That’s it. All that’s left to do is create a cron job to schedule the task at regular intervals and sleep peacefully. If you have any questions, shoot them in the comments section below.
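For instance, a crontab entry that runs the script daily at 3 a.m. could look like this (the paths are placeholders for your own setup):

```
# run the backup script every day at 03:00 (adjust paths to your setup)
0 3 * * * /usr/bin/php /path/to/backup-script.php >> /var/log/db-backup.log 2>&1
```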
