Migrating existing AWS accounts to the new setup
If you have existing AWS accounts in your Netskope tenant that use the old setup, where DLP and malware scans are performed using CloudTrail events, you can migrate them to the new setup, which uses CloudWatch events to scan your resources for DLP violations and malware more efficiently.
You can use one of the following methods to migrate your accounts.
Bulk migrate using migration_script.py
You can run the migration_script.py script to migrate several accounts at the same time. The script requires you to create a CSV file with one line per account in the following format:
<instance_name>,<api_token>,<tenant_url>,<aws_access_key_id>,<aws_secret_key>,<aws_region>
where:
<instance_name> is the name of the AWS account instance in the Netskope tenant.
<api_token> is the REST API token from the Netskope tenant under Settings > Tools > REST API.
<tenant_url> is the Netskope tenant URL.
<aws_access_key_id> and <aws_secret_key> are the access key ID and secret access key for the AWS account.
<aws_region> is the AWS region in which you want the script to upload the migration CFT, aws-instance-setup.yml.
For information about the steps performed by the script, see About migration_script.py.
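For example, a CSV file covering two accounts might look like the following, where every value is a placeholder (including AWS's documented example keys) that you replace with your own account details:

prod-aws,abcd1234token,example.goskope.com,AKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY,us-east-1
dev-aws,abcd1234token,example.goskope.com,AKIAI44QH8DHBEXAMPLE,je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY,us-west-2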
About migration_script.py
The script, migration_script.py, performs the following steps:
Reads the CSV file and verifies that the instance for the AWS account exists in the specified Netskope tenant.
Uses the API token to call the REST API and download the migration CFT, aws-instance-setup.yml. The template is customized based on the Netskope Public Cloud services enabled for the account.
Uses the provided AWS access key and secret key to connect to the AWS account and creates a stack called netskopeCAR in the specified AWS region.
Uploads aws-instance-setup.yml to the netskopeCAR stack. When stack creation is complete, the AWS account has been migrated to the new setup.
For information on aws-instance-setup.yml and the background process, see the "What happens in the process?" sections in the following topics.
Run migration_script.py
To run migration_script.py:
Create a file called migration_script.py and copy the following script into the file.

import argparse
import requests
import time
import csv
import json
import boto3
import random
from botocore.exceptions import ClientError
import concurrent.futures
from datetime import datetime

STACK_UP_TO_DATE = 'No updates are to be performed'
STACK_NOT_EXIST = 'does not exist'
MAX_RETRIES = 8
MIN_STACK_WAIT = 1
MAX_STACK_WAIT = 10
CLOUDFORMATION_SERVICE = 'cloudformation'
STACK_NAME = 'netskopeCAR'
STACK_SUCC_STATES = ['CREATE_COMPLETE', 'UPDATE_COMPLETE']
IN_PROGRESS = 'IN_PROGRESS'
ROLLED_BACK = 'ROLLBACK_COMPLETE'
FAILED = 'FAILED'
STACKS = 'Stacks'
STACK_STATUS = 'StackStatus'
STACK_ID = 'StackId'
STACK_STATUS_REASON = 'StackStatusReason'
APP = "aws"
PATH = "/api/v1/public_cloud/account"


def get_instance(hostname, app, instance_name, token):
    """
    Get instance details
    :param hostname: tenant host
    :param app: app name 'aws'
    :param instance_name: name of the instance
    :param token: REST API token
    :return: instance details or Exception
    """
    print("Getting info for {}".format(instance_name))
    params = {
        "app": app,
        "instance_name": instance_name,
        "token": token
    }
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json"
    }
    get_response = requests.get("https://{}/api/v1/introspection_instance".format(hostname),
                                headers=headers, params=params, verify=False)
    response_data = {}
    json_output = json.loads(get_response.text)
    if json_output.get("status") == "error":
        raise Exception('Errors: {}'.format(json_output.get("errors")))
    response_data["use_for"] = json_output["data"]["instance"]["use_for"]
    response_data["admin_email"] = json_output["data"]["instance"]["admin_email"]
    if "securityscan" in response_data["use_for"]:
        response_data["securityscan_interval"] = json_output["data"]["instance"]["securityscan_interval"]
    print("Returning info for name {}: {}".format(instance_name, response_data))
    return response_data


def download_cft(hostname, token, app, instance_name, admin_email, usefor, securityscan_interval):
    """
    call the download of the migration CFT for the instance
    :param hostname: tenant host
    :param token: REST API token
    :param app: app name 'aws'
    :param instance_name: instance name
    :param admin_email: admin email for the instance
    :param usefor: services used by the instance
    :param securityscan_interval: security scan interval
    :return: CFT content or Exception
    """
    print("Downloading Migration Template for {}".format(instance_name))
    qparams = {"token": token, "op": "download"}
    headers = {
        "Content-Type": "application/json"
    }
    data = {
        "app": app,
        "type": "cft",
        "use_for": usefor,
        "mode": "migrate",
        "admin_email": admin_email,
        "instance_name": instance_name,
        "securityscan_interval": securityscan_interval
    }
    res = requests.post("https://{}{}".format(hostname, PATH), headers=headers,
                        params=qparams, data=json.dumps(data), verify=False)
    response = res.text
    if response.startswith("{"):
        json_output = json.loads(response)
        if json_output.get("status") == "error":
            raise Exception('For instance "{}" errors received: {}'.format(instance_name, json_output.get("errors")))
    return response


def create_resource(instance_name, region, access_key, secret_key, template):
    """
    Call stack creation and wait the stack creation to be completed
    :param instance_name: name of the instance
    :param region: region in which the stack will be created
    :param access_key: access key id for target aws account
    :param secret_key: secret key for the target aws account
    :param template: CFT template
    :return: Exception if encountered
    """
    try:
        print("creating stacks....")
        client = boto3.client(CLOUDFORMATION_SERVICE, aws_access_key_id=access_key,
                              aws_secret_access_key=secret_key, region_name=region)
        if not stack_exists(client, STACK_NAME, region):
            print("stack does not exist, creating stack")
            response = client.create_stack(
                StackName=STACK_NAME,
                TemplateBody=template,
                Capabilities=['CAPABILITY_NAMED_IAM']
            )
            now = datetime.now()
            start_time = datetime.timestamp(now)
        else:
            raise Exception("Stack already present for instance: {}".format(instance_name))
        print("Waiting Stack creation to be completed for {}".format(instance_name))
        while True:
            time.sleep(random.randint(MIN_STACK_WAIT, MAX_STACK_WAIT))
            stacks = client.describe_stacks(StackName=response[STACK_ID])
            stack_status = stacks[STACKS][0][STACK_STATUS]
            if stack_status in STACK_SUCC_STATES:
                print("Stack creation for {} was successful".format(instance_name))
                break
            if IN_PROGRESS not in stack_status:
                if stack_status == ROLLED_BACK or FAILED in stack_status:
                    reason = stacks[STACKS][0].get(STACK_STATUS_REASON)
                    if reason:
                        raise Exception("Exception received while creating stack {} for instance : {} , {}".format(STACK_NAME, instance_name, reason))
                    else:
                        raise
            current_time = datetime.timestamp(datetime.now())
            if start_time + 600 < current_time:
                raise Exception("Stack creation for {} took longer than expected, please check the AWS console".format(instance_name))
    except ClientError as exc:
        if STACK_UP_TO_DATE not in str(exc):
            raise
        else:
            print("{} Stack {} already up to date and no updates are to be performed".format(instance_name, STACK_NAME))
    except Exception as exc:
        raise


def stack_exists(client, name, region):
    """
    Check if the stack with the provided name is already present
    :param client: client
    :param name: name of the stack
    :param region: region in which the stack is to be deployed
    :return: true if stack exists
    """
    try:
        while True:
            stacks = client.describe_stacks(StackName=name)
            stack_status = stacks[STACKS][0][STACK_STATUS]
            if 'IN_PROGRESS' in stack_status:
                time.sleep(random.randint(MIN_STACK_WAIT, MAX_STACK_WAIT))
            elif ROLLED_BACK == stack_status or FAILED in stack_status:
                print("Stack in {} state. Stack cannot be recovered, triggering deleting stack in region : {}".format(stack_status, region))
                client.delete_stack(StackName=STACK_NAME)
                time.sleep(random.randint(MIN_STACK_WAIT, MAX_STACK_WAIT))
            else:
                return True
    except ClientError as exc:
        if STACK_NOT_EXIST in str(exc):
            return False
        else:
            raise


def start_migration(row):
    """
    Caller method for gathering the instance details
    call methods for
    Getting instance details
    Download migration CFT
    call creation of stack
    :param row: single row containing the instance details
    :return:
    """
    details = {
        "instance_name": row[0],
        "apitoken": row[1],
        "tenant_url": row[2],
        "access_key_id": row[3],
        "secret_key": row[4],
        "region": row[5]
    }
    hostname = details["tenant_url"]
    instance_name = details["instance_name"]
    apitoken = details["apitoken"]
    region = details["region"]
    access_key_id = details["access_key_id"]
    secret_key = details["secret_key"]
    try:
        response_data = get_instance(hostname, APP, instance_name, apitoken)
        use_for = response_data["use_for"]
        admin_email = response_data["admin_email"]
        securityscan_interval = response_data.get("securityscan_interval")
        print("{}, {}".format(use_for, admin_email))
        cft_yaml_body = download_cft(hostname, apitoken, APP, instance_name, admin_email, use_for, securityscan_interval)
        create_resource(instance_name, region, access_key_id, secret_key, cft_yaml_body)
        print('Migration triggered successfully for : {}'.format(instance_name))
    except Exception as exc:
        return "For {} Exception received : {}".format(instance_name, str(exc))


if __name__ == "__main__":
    """
    Customer will create and provide location of the csv file from which
    each instance migration will be triggered.
    Migration contains the following parts
    1-> get instance details for the instance
    2-> download migration CFT for the instance
    3-> trigger create of stack with the downloaded CFT
    4-> Wait for the stack creation to be completed
    5-> post success or failure of the migration
    """
    PARSER = argparse.ArgumentParser()
    PARSER.add_argument("--file", help="csv file", required=True)
    ARGS = PARSER.parse_args()
    file_path = ARGS.file
    results = []
    if str(file_path).endswith(".csv"):
        with open(file_path) as f:
            reader = csv.reader(f)
            with concurrent.futures.ThreadPoolExecutor(max_workers=200) as executor:
                futures = [executor.submit(start_migration, row) for row in reader]
                for idx, future in enumerate(concurrent.futures.as_completed(futures)):
                    try:
                        res = future.result(timeout=600)
                        results.append(res)
                    except concurrent.futures._base.TimeoutError as exc:
                        print("timeout_error {} {}".format(str(exc), future))
        print("FAILED RESULTS : {}".format(results))
    else:
        print("Expected a csv file and file path: {} is not a csv file".format(file_path))
Open a CLI and run the following command at the prompt:
python migration_script.py --file <path-to-csv>
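The script imports the requests and boto3 packages, so install them first (for example, with pip install requests boto3) if they are not already available. A typical invocation, assuming the CSV file is saved under the placeholder name accounts.csv, looks like this:

python migration_script.py --file accounts.csv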
Manually migrate each account using aws-instance-setup.yml
You can migrate one account at a time by downloading aws-instance-setup.yml and uploading it to a new CloudFormation stack in the AWS account.
To download aws-instance-setup.yml, call the following REST API.
https://<tenant-name>.goskope.com/api/v1/public_cloud/account?token=<token>&op=download
For more information on REST API endpoints, see Public Cloud API Endpoints for REST API v1.
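If you prefer to script this download, the following is a minimal sketch modeled on the download_cft() request in migration_script.py above; the tenant name, token, instance name, admin email, and use_for values are placeholders that you must replace with your own account's settings.

import json
import requests

tenant = "<tenant-name>.goskope.com"            # placeholder tenant host
params = {"token": "<token>", "op": "download"}  # REST API token from Settings > Tools > REST API
data = {
    "app": "aws",
    "type": "cft",
    "mode": "migrate",
    "instance_name": "my-aws-instance",          # placeholder: AWS instance name in the tenant
    "admin_email": "admin@example.com",          # placeholder: the instance's admin email
    "use_for": ["securityscan"],                 # placeholder: services enabled for the instance
}
response = requests.post(
    "https://{}/api/v1/public_cloud/account".format(tenant),
    headers={"Content-Type": "application/json"},
    params=params,
    data=json.dumps(data),
)
# Save the returned template so it can be uploaded to a CloudFormation stack.
with open("aws-instance-setup.yml", "w") as f:
    f.write(response.text)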
If you want to download aws-instance-setup.yml through the Netskope tenant instead:
Go to the Settings > API-enabled Protection > IaaS page and click the AWS account to view its edit screen.
Click Start Migration and download the CFT, aws-instance-setup.yml, from the migrate account screen.
After you've downloaded aws-instance-setup.yml, you must create a new CloudFormation stack in the AWS account and upload the template. For detailed instructions on creating a stack and uploading the template, see the following topics.
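As an alternative to the console workflow described in those topics, the following is a minimal boto3 sketch that mirrors what migration_script.py does; the region and file path are placeholders.

import boto3

# Read the migration template downloaded from the Netskope tenant.
with open("aws-instance-setup.yml") as f:
    template_body = f.read()

# Placeholder region; use the region where you want the stack created.
client = boto3.client("cloudformation", region_name="us-east-1")
client.create_stack(
    StackName="netskopeCAR",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
# Wait until the stack reaches CREATE_COMPLETE before treating the account as migrated.
client.get_waiter("stack_create_complete").wait(StackName="netskopeCAR")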