AWS Client Stack Deployment


Note: The information on this page applies to Content Mover and Business Continuity.

The CAM AWS Stack utilizes Amazon AWS services and resources to move documents and content and to sync metadata between multiple Document Management Systems (DMS). CAM Content Mover and Data Sync (Content Sync) use S3 buckets as intermediary storage while moving content between the supported Document Management Systems.

To deploy the CAM AWS Stack, you need to deploy the AWS template into your environment. The template deploys a complete solution that contains multiple resources, such as Lambda functions, MySQL instances, S3 buckets, and others. These resources are provided and hosted by AWS in your AWS account to give you more control over moving content.

AWS Stack Creation

Create SQS Queues

Create two SQS queues manually as follows:

  1. contentsync-prod-job-process-v1

Settings:

  • Queue type: Standard

  • Receive Message Wait Time: 0 seconds

  • Message Retention Period: 14 days

  • Maximum Message Size: 256 KB

  • Redrive policy: not used

  • SSE (Server-Side Encryption): not used

  • Delivery Delay: 0 seconds

  • Default Visibility Timeout: 16 minutes

2. contentsync-prod-etl-process-v1

Settings:

  • Queue type: Standard

  • Receive Message Wait Time: 0 seconds

  • Message Retention Period: 14 days

  • Maximum Message Size: 256 KB

  • Redrive policy: not used

  • SSE (Server-Side Encryption): not used

  • Delivery Delay: 0 seconds

  • Default Visibility Timeout: 16 minutes
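As an alternative to the console, both queues can be created with the AWS CLI. This is a sketch that assumes a configured AWS CLI; the values are the settings above converted to seconds and bytes (16 minutes = 960 seconds, 14 days = 1209600 seconds, 256 KB = 262144 bytes):

```shell
# Create both Content Sync queues with the settings listed above.
for queue in contentsync-prod-job-process-v1 contentsync-prod-etl-process-v1; do
  aws sqs create-queue \
    --queue-name "$queue" \
    --region us-east-1 \
    --attributes '{
      "VisibilityTimeout": "960",
      "MessageRetentionPeriod": "1209600",
      "ReceiveMessageWaitTimeSeconds": "0",
      "DelaySeconds": "0",
      "MaximumMessageSize": "262144",
      "SqsManagedSseEnabled": "false"
    }'
done
```

Standard type and no redrive policy are the SQS defaults; SqsManagedSseEnabled is set explicitly because newer AWS accounts enable SQS-managed encryption on new queues by default.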

Create S3 Buckets

Create two buckets manually:

content-sync-configuration-$subdomain - This bucket is used to share the AWS setup scripts with the user.

$subdomain-prosperoware-io-encrypted-bucket - This bucket is used to store the client's content. Set its default encryption to AES-256.

Note: Replace $subdomain with your subdomain name.
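If you are scripting the setup, the same buckets can be created with the AWS CLI (a sketch; the subdomain value is a placeholder, and create-bucket as written targets us-east-1, which takes no LocationConstraint):

```shell
subdomain=example   # placeholder - use your subdomain name

# Configuration bucket shared with the user
aws s3api create-bucket --bucket "content-sync-configuration-$subdomain"

# Content bucket, with AES-256 default encryption as required
aws s3api create-bucket --bucket "$subdomain-prosperoware-io-encrypted-bucket"
aws s3api put-bucket-encryption \
  --bucket "$subdomain-prosperoware-io-encrypted-bucket" \
  --server-side-encryption-configuration \
  '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
```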

Create the VPC

  • Litera will share the scripts to create the VPC.

  • Download the scripts, which will be available at content-sync-configuration-$subdomain/vpc-configuration/.

  • Configure the AWS CLI. Follow the instructions provided at AWS: AWS CLI.

  • Execute the command below in the terminal to create the VPC:

  • sh deploy.script create-stack --region us-east-1

Note: --region specifies which AWS Region to send this command's AWS request to.

Create the Lambda Security Group

  1. Go to the VPC console.

  2. Under Security Groups, click Create Security Group:

  • Set the security group name to Lambda-SG.

  • Select the VPC created in the previous step.

  • Add an inbound rule and an outbound rule with All Traffic and Source 0.0.0.0/0 (or Destination 0.0.0.0/0 for the outbound rule).
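The same security group can be created from the CLI (a sketch; the VPC lookup filter is an assumption - adjust it to however your VPC is actually tagged):

```shell
# Find the VPC created by the deploy script (the tag value is an assumption)
vpc_id=$(aws ec2 describe-vpcs \
  --filters "Name=tag:Name,Values=content-sync-vpc" \
  --query 'Vpcs[0].VpcId' --output text)

# Create the Lambda security group inside that VPC
sg_id=$(aws ec2 create-security-group \
  --group-name Lambda-SG \
  --description "Content Sync Lambda security group" \
  --vpc-id "$vpc_id" \
  --query 'GroupId' --output text)

# Allow all inbound traffic from 0.0.0.0/0; a newly created group
# already allows all outbound traffic by default
aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
```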

Upload the shared content to the S3 bucket

Update the following two files:

  1. Open params-prod.yml available at S3Content/serverless/params/us-east-1

    • LambdaRole (update the AWSACCOUNTID to your AWSACCOUNTID)

    • SecurityGroup (Lambda-SG created in the previous step)

    • SubnetIds (please update the SubnetIds to the private subnet IDs of the VPC created earlier)

    • ContentSyncJobProcessQueue (update the AWSACCOUNTID value to your AWSACCOUNTID)

  2. Open appconfig-prod.yml available at S3Content/config/ymls

    • CS_JOB_QUEUE (update the AWSACCOUNTID value to your AWSACCOUNTID)

  3. Upload these files to the S3 bucket content-sync-configuration-$subdomain.
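The edited files can also be uploaded from the CLI (a sketch; the key paths assume the upload mirrors the S3Content folder layout referenced above):

```shell
subdomain=example   # placeholder - use your subdomain name

aws s3 cp params-prod.yml \
  "s3://content-sync-configuration-$subdomain/serverless/params/us-east-1/params-prod.yml"
aws s3 cp appconfig-prod.yml \
  "s3://content-sync-configuration-$subdomain/config/ymls/appconfig-prod.yml"
```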

Create a CodeBuild project

  1. Go to CodeBuild Console

  2. Click Create build project.

  3. Set a Project name and Description in the Project configuration section.

  4. Set the source provider to S3.

  5. Choose the bucket content-sync-configuration-$subdomain created in previous steps.

  6. Set the S3 object key or folder to “serverless/”.

  7. Note: This S3 folder is used for the stack.

  8. In the Environment section, select Ubuntu as the operating system.

  9. Set the Runtime(s) to Standard, the image to aws/codebuild/standard:2.0, the image version to Always use the latest image for this runtime version, and the environment type to Linux.

  10. Leave the privileges section unchecked.

  11. Select New service role and set the role name to contentsync_role.

  12. Click on Additional Configuration.

  13. Set the VPC to the one created in the previous steps.

  14. Set the Subnets to Private Subnet #1 and Private Subnet #2.

  15. Set the Security Group to the Lambda Security Group created at Step #3.

  16. Under BuildSpec, select Insert build commands and insert the buildspec file shared at S3Content/serverless/buildspec.yml.

Note: Update --region if it's other than us-east-1.

17. The buildspec file contains some pre-configured commands that don’t need changes:

  • The runtime version, installing Serverless, and deploying the stack to the desired AWS region.

18. Leave the Artifacts section unchanged.

19. Enable CloudWatch logs option.

20. Leave the group name and stream name blank.

21. Click Create build project.

Create the IAM Role

Name the Lambda role: content_sync_lambda_service_node

Provide the following permissions to the Lambda role:

  • AmazonEC2FullAccess

  • SecretsManagerReadWrite

  • AmazonSQSFullAccess

  • AmazonS3FullAccess

  • AmazonDynamoDBFullAccess

  • AmazonSESFullAccess

  • AmazonVPCFullAccess

  • AmazonSNSFullAccess

  • ContentSyncLambdaBasicExecution(Managed Policy)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
  • KMSWriteLambda(Managed Policy)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeCustomKeyStores",
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:ReEncryptTo",
        "kms:GenerateDataKeyWithoutPlaintext",
        "kms:DescribeKey",
        "kms:GenerateDataKeyPairWithoutPlaintext",
        "kms:GenerateDataKeyPair",
        "kms:ReEncryptFrom"
      ],
      "Resource": "*"
    }
  ]
}
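If you create the role from the CLI instead of the console, the managed-policy attachments look like this (a sketch; the two custom policies above would first be created with aws iam create-policy from their JSON documents):

```shell
# Create the Lambda service role with a standard Lambda trust policy
aws iam create-role \
  --role-name content_sync_lambda_service_node \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the AWS managed policies listed above
for policy in AmazonEC2FullAccess SecretsManagerReadWrite AmazonSQSFullAccess \
              AmazonS3FullAccess AmazonDynamoDBFullAccess AmazonSESFullAccess \
              AmazonVPCFullAccess AmazonSNSFullAccess; do
  aws iam attach-role-policy \
    --role-name content_sync_lambda_service_node \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done
```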

Edit the IAM Role

On the IAM console go to Roles.

Click contentsync_role.

Under Permissions of role, attach the AdministratorAccess Policy.

Note: This policy is managed by AWS and permission is given to the CodeBuild service alone. It is required to create and update multiple services such as CloudFormation stack, Lambda function, DynamoDB table, etc.
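The same attachment can be done from the CLI; because AdministratorAccess is an AWS managed policy, its ARN is fixed:

```shell
aws iam attach-role-policy \
  --role-name contentsync_role \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```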

Deploy the Stack

  1. Go to CodeBuild service console.

  2. Click the created Content Sync project under Build Projects.

  3. Click Start Build.

The data under the Start Build section is predefined and should not be overwritten.
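A build can also be started from the CLI; with no override options, start-build uses exactly the project's predefined settings, which matches the note above (the project name is a placeholder):

```shell
aws codebuild start-build --project-name content-sync-project
```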

Lambda Validation

  1. Go to Lambda console 

  2. There will be 9 lambda functions deployed for content sync and ETL (Content Mover): 

  • contentsync-**-contentsync-worker-v1 

  • contentsync-**-contentsync-api-v1 

  • contentsync-**-contentsync-manager-v1 

  • contentsync-**-contentsync-etl-worker-v1

  • contentsync-**-contentsync-etl-mapping-manager-v1

  • contentsync-**-contentsync-etl-renewal-manager-v1

  • contentsync-**-contentsync-etl-renewal-worker-v1

  • contentsync-**-contentsync-etl-mapping-worker-v1

  • contentsync-**-contentsync-systemauth-reload-v1

     

For each of the 9 lambdas above,

  • select the Code source section,

  • click on the upload from drop-down button → Amazon S3 location.

  • Supply the Object URL of the jar file uploaded to the lambda-functions-code folder within the content-sync-configuration-[tenantname] bucket.

For the latest Content Sync code, go to “Prosperoware-cam.contentsync.apis-1.0” on Jenkins and check the latest build deployed in production. For example, if the build number was #130, go to this S3 location:

cam.retention.prosperoware.io.jenkins.lambda → Folder: contentsync → Folder: jenkins-Prosperowaredev.io-cam.contentsync.apis-1.0-{LatestBuild} → Download: content-sync-aws-1.0-SNAPSHOT.jar
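Updating all nine functions by hand is tedious; here is a sketch of the same update from the CLI (the stage segment shown as ** above and the bucket/key are assumptions - substitute your actual names):

```shell
subdomain=example    # placeholder - use your subdomain name
stage=prod           # the ** segment in the function names above
bucket="content-sync-configuration-$subdomain"
key="lambda-functions-code/content-sync-aws-1.0-SNAPSHOT.jar"

# Point every Content Sync function at the uploaded jar
for fn in worker api manager etl-worker etl-mapping-manager \
          etl-renewal-manager etl-renewal-worker etl-mapping-worker \
          systemauth-reload; do
  aws lambda update-function-code \
    --function-name "contentsync-$stage-contentsync-$fn-v1" \
    --s3-bucket "$bucket" --s3-key "$key"
done
```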

 

 

Complete the Function Upload

Click Upload under the function package. Upload the jar shared in the S3Content/lambda-functions-code folder, and click Save in the top-right corner.

Stack Updates

Upgrades to the stack consist of these steps:

Create S3 Buckets

Create two buckets manually:

content-sync-configuration-$subdomain - This bucket is used to share the AWS setup scripts with the user.

$subdomain-prosperoware-io-encrypted-bucket - This bucket is used to store the client's content. Set its default encryption to AES-256.

Note: Replace $subdomain with your subdomain name.

Upload the shared content to the S3 bucket

Update the following two files:

  1. Open params-prod.yml available at S3Content/serverless/params/us-east-1

    • LambdaRole (update the AWSACCOUNTID to your AWSACCOUNTID)

    • SecurityGroup (Lambda-SG created in the previous step)

    • SubnetIds (please update the SubnetIds to the private subnet IDs of the VPC created earlier)

    • ContentSyncJobProcessQueue (update the AWSACCOUNTID value to your AWSACCOUNTID)

  2. Open appconfig-prod.yml available at S3Content/config/ymls

    • CS_JOB_QUEUE (update the AWSACCOUNTID value to your AWSACCOUNTID)

  3. Upload these files to the S3 bucket content-sync-configuration-$subdomain.

Complete the Function Upload

Click Upload under the function package. Upload the jar shared in the S3Content/lambda-functions-code folder, and click Save in the top-right corner.

AWS at Client Security Configuration and Validation

Security Configuration and Validation

AWS operates under a shared responsibility model. AWS takes care of security ‘of’ the cloud while we (AWS customers) are responsible for security ‘in’ the cloud. Let's validate the security for each AWS service used.

Identity and Access Management (IAM)

  1. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

  2. Do not access AWS with the root administrator account; instead, use a user defined in IAM.

SQS Queues

The queues use the default encryption provided by AWS.

Note: Before pushing jobs to the SQS queue (CaseCreator), enable the debug log. This will help you troubleshoot failed jobs.

S3 - Simple Storage Service

  1. Default encryption – The second bucket, where the content is stored, is encrypted with AES-256. This helps securely store the files residing in the bucket.

  2. Object lock – Prevents certain objects from being deleted.

  3. Bucket policy, ACLs, and CORS – The buckets should not be accessible from the open web, or even from other accounts within your AWS Organisation. Configure these settings to protect sensitive data.
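These S3 settings can be verified and enforced from the CLI (a sketch against the content bucket; the subdomain is a placeholder):

```shell
subdomain=example   # placeholder - use your subdomain name
bucket="$subdomain-prosperoware-io-encrypted-bucket"

# Confirm AES-256 default encryption is configured
aws s3api get-bucket-encryption --bucket "$bucket"

# Block all public access to the bucket
aws s3api put-public-access-block --bucket "$bucket" \
  --public-access-block-configuration \
  'BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true'
```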

VPC

  • One VPC per environment, with 2 private subnets and 2 public subnets. The private subnets are used when setting up the deployed resources so there’s no external access to the provisioned AWS resources.

Lambda Security Group

  1. The Lambda security group is created inside the VPC so there’s no outside access allowed to the resources using the security group.

  2. The inbound and outbound rules will allow access to all the services in the VPC.

  3. Create the Lambda role - The Lambda role is created to share access between Lambda and other resources, and among those resources themselves.

  4. Custom policies allow the Lambda functions to create and write logs to CloudWatch.

DynamoDB

  1. Data is redundantly stored on multiple devices across multiple facilities in an Amazon DynamoDB Region

  2. All user data stored in Amazon DynamoDB is fully encrypted at rest using 256-bit Advanced Encryption Standard (AES-256).

  3. DynamoDB encryption at rest provides an additional layer of data protection by securing your data in an encrypted table, including its primary key, local and global secondary indexes, and backups.

  4. Use IAM roles to authenticate access to DynamoDB

    Use IAM policies for DynamoDB base authorization

Set up security monitoring

  1. API Gateway

    1. This AWS Service is for creating, publishing, maintaining, monitoring and securing APIs.

    2. In CAM we use REST APIs integrated with Lambda functions endpoints or other AWS Services to restrict unauthorized access.

    3. The data which is in transit is encrypted by default through API Gateway

  2. CloudTrails

    1. AWS Cloudtrail is another service we use as an integrated service with API Gateway

    2. Cloudtrail is a service that provides a record of events or actions taken by a user, role or AWS Service in API Gateway.

    3. Cloudtrail captures all REST API calls. This helps us strengthen security on API Gateway by auditing and monitoring internally the REST API calls being made from the API Gateway console and also code calls to the API Gateway service APIs.

  3. GuardDuty

    1. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in AWS.

    2. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs.

    3. We have enabled Amazon GuardDuty in all AWS Regions (including Regions that are not in use) as recommended by AWS for better security protection.

    4. The following list contains some types of attacks that GuardDuty can detect:

      • Denial Of Service

      • Brute Force

      • Tor Client

      • Attacks from different OS (Kali Linux, Parrot Linux, Pentoo Linux)

      • Phishing Domain Request

Elastic IP Configuration for client stack

  • Once you have deployed the client stack in your AWS account, follow the procedure below to configure the client stack and the iManage region where you provisioned it.

Whitelisting for AWS

For more information on iManage region whitelist IPs, click here.

  • Navigate to the AWS account

  • Navigate to the VPC service.

  • Navigate to the NAT Gateways.

  • Note the Elastic IP addresses attached to each NAT Gateway.

  • Whitelist the attached IPs against your iManage server for the client stack.
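The attached Elastic IPs can also be listed with one CLI call instead of clicking through the console (add a vpc-id filter if the account has NAT Gateways outside this stack):

```shell
# Elastic IPs of all NAT Gateways - whitelist these on the iManage server
aws ec2 describe-nat-gateways \
  --query 'NatGateways[].NatGatewayAddresses[].PublicIp' --output text
```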

Configure Content Mover (ETL) within CAM

Click here for details on how to update the Content Mover sync in CAM using the new stack.

Updating the Stack MySQL

CAM’s AWS RDS is being upgraded to MySQL 8.4. AWS has provided the following documentation to follow from your AWS portal for CAM.

  1. No stack upgrade is necessary for this change.

  2. Amazon supports one of the following scenarios for the upgrade:

    1. MySQL 5.7 to MySQL 8.0

    2. MySQL 8.0 to MySQL 8.4

  3. MySQL major version upgrades typically complete in about 10 minutes. Some upgrades might take longer because of the DB instance class size or because the instance doesn't follow certain operational guidelines in Best practices for Amazon RDS. If you upgrade a DB instance from the Amazon RDS console, the status of the DB instance indicates when the upgrade is complete.

  4. Note: If a failure occurs, AWS states the upgrade is automatically rolled back to the prior version by AWS. When an upgrade fails and is rolled back, Amazon RDS generates an event with the event ID RDS-EVENT-0188

  5. First run through the prechecks AWS gives based on your MySQL version:

    1. Prechecks for upgrades from MySQL 8.0 to 8.4

    2. Prechecks for upgrades from MySQL 5.7 to 8.0

  6. If you choose to test the upgrade, you can make a snapshot and follow the AWS steps to upgrade a snapshot: Upgrading a MySQL DB snapshot engine version - Amazon Relational Database Service

  7. Log in to the AWS Management Console

    Go to the AWS Management Console and log in using your AWS account. Once logged in, type “RDS” in the service search bar and select Amazon RDS from the list.

  8. Select the MySQL Instance to Update

    In the Amazon RDS console, you’ll see a list of existing database instances. Choose the MySQL instance that you want to upgrade.

  9. Choose to Change Engine Version

    Go back to the instance details page:
    Click on Modify.
    Find the DB Engine Version section.

  10. Select the Desired MySQL 8.x Version

    In the DB Engine Version section, choose the appropriate MySQL 8.x version from the list of available versions.

  11. Check Other Settings if Needed

    Check other settings such as Multi-AZ Deployment, Storage, and Backup to ensure they are still appropriate after the upgrade.
    Select Continue to proceed with the changes.

  12. Apply Changes

    Scroll to the bottom of the page, and you’ll see the Scheduling of Modifications option:

    • Apply Immediately: Apply changes immediately (may cause service interruption).

    • Apply During the Next Maintenance Window: Apply changes during the next maintenance window (less disruptive).

    Select Apply Immediately if you want to implement the changes immediately.
    Select Modify DB instance.
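Steps 7-12 can be collapsed into one CLI call (a sketch; the instance identifier and target version are assumptions - use your own instance name and a version offered in your Region):

```shell
# Major version upgrade; omit --apply-immediately to wait for the
# next maintenance window instead
aws rds modify-db-instance \
  --db-instance-identifier cam-contentsync-mysql \
  --engine-version 8.4.3 \
  --allow-major-version-upgrade \
  --apply-immediately
```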

Download A Document with Business Continuity

  1. Log in as the BC user.

  2. Go to the Documents tab.

  3. Search for a document.

Note: Database Search needs to be enabled to download from Business Continuity.

4. Open the resulting document. Click the download icon in the details.

Note: You can only download one document at a time.


Let's Connect📌

☎ +1 630.598.1100
☎ ‪+44 20 3880 1550‬
📧 support@litera.com
💻 https://www.litera.com/support/

📝 Support is available:
4 am - 8 pm US Eastern
(9 am - 1 am GMT/BST
7 pm - 11 am AET) on normal business days (excluding holidays)

© 2024 Litera