AWS Certified DevOps Engineer Professional (DOP-C02)

The AWS Certified DevOps Engineer Professional (DOP-C02) exam questions were last updated today.
Disclaimers:
  • ExamTopics website is not related to, affiliated with, endorsed, or authorized by Amazon.
  • Trademarks, certification & product names are used for reference only and belong to Amazon.

Topic 1 - Exam A

Question #46 Topic 1

A company is using AWS Organizations to centrally manage its AWS accounts. The company has turned on AWS Config in each member account by using AWS CloudFormation StackSets. The company has configured trusted access in Organizations for AWS Config and has configured a member account as a delegated administrator account for AWS Config. A DevOps engineer needs to implement a new security policy. The policy must require all current and future AWS member accounts to use a common baseline of AWS Config rules that contain remediation actions that are managed from a central account. Non-administrator users who can access member accounts must not be able to modify this common baseline of AWS Config rules that are deployed into each member account. Which solution will meet these requirements?

  • A Create a CloudFormation template that contains the AWS Config rules and remediation actions. Deploy the template from the Organizations management account by using CloudFormation StackSets.
  • B Create an AWS Config conformance pack that contains the AWS Config rules and remediation actions. Deploy the pack from the Organizations management account by using CloudFormation StackSets.
  • C Create a CloudFormation template that contains the AWS Config rules and remediation actions. Deploy the template from the delegated administrator account by using AWS Config.
  • D Create an AWS Config conformance pack that contains the AWS Config rules and remediation actions. Deploy the pack from the delegated administrator account by using AWS Config.
Suggested Answer: B
NOTE: Creating an AWS Config conformance pack allows for the deployment of a common baseline of AWS Config rules and remediation actions from the Organizations management account using CloudFormation StackSets. This ensures that all current and future member accounts use the same set of rules and actions. By leveraging CloudFormation StackSets, the conformance pack can be deployed centrally and managed by the organization.
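As a rough illustration of the suggested approach, the sketch below (Python with boto3) deploys a service-managed StackSet from the management account whose template contains an AWS::Config::ConformancePack resource, with automatic deployment enabled so future member accounts receive the same baseline. The stack set name, template body, S3 URI, OU ID, and Region are placeholders, not values from the question.

    import boto3

    cfn = boto3.client("cloudformation")  # assumes management-account credentials

    # Hypothetical StackSet template: a conformance pack whose rules and
    # remediation actions are defined in a pack template stored in S3.
    TEMPLATE_BODY = """
    Resources:
      BaselineConformancePack:
        Type: AWS::Config::ConformancePack
        Properties:
          ConformancePackName: common-baseline
          TemplateS3Uri: s3://example-bucket/baseline-pack.yaml
    """

    # SERVICE_MANAGED permissions let StackSets deploy across the organization
    # (requires trusted access for StackSets) and auto-deploy to new accounts.
    cfn.create_stack_set(
        StackSetName="config-baseline",
        TemplateBody=TEMPLATE_BODY,
        PermissionModel="SERVICE_MANAGED",
        AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    )

    cfn.create_stack_instances(
        StackSetName="config-baseline",
        DeploymentTargets={"OrganizationalUnitIds": ["ou-example-12345678"]},
        Regions=["us-east-1"],
    )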
Question #47 Topic 1

A DevOps engineer wants to deploy a serverless web application that is based on AWS Lambda. The deployment must meet the following requirements: Provide staging and production environments. Restrict developers from accessing the production environment. Avoid hardcoding passwords in the Lambda functions. Store source code in AWS CodeCommit. Use AWS CodePipeline to automate the deployment. What is the MOST operationally efficient solution that meets these requirements?

  • A Create separate staging and production accounts to segregate deployment targets. Use AWS Key Management Service (AWS KMS) to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
  • B Create separate staging and production accounts to segregate deployment targets. Use Lambda environment variables to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
  • C Define tagging conventions for staging and production environments to segregate deployment targets. Use AWS Key Management Service (AWS KMS) to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
  • D Define tagging conventions for staging and production environments to segregate deployment targets. Use Lambda environment variables to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
Suggested Answer: A
NOTE: Creating separate staging and production accounts gives the strongest segregation between deployment targets and makes it straightforward to keep developers out of the production environment. Using AWS Key Management Service (AWS KMS) for environment-specific values avoids hardcoding passwords in the Lambda functions, and CodePipeline with AWS CodeDeploy automates the deployments. Together these choices satisfy the requirements with minimal operational overhead.
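To illustrate the "avoid hardcoding passwords" requirement, here is a minimal Python (boto3) sketch that decrypts a KMS-encrypted, base64-encoded value at runtime inside the Lambda function. The environment variable name and the choice of an environment variable as the ciphertext carrier are assumptions for the example, not part of the question.

    import base64
    import os
    import boto3

    kms = boto3.client("kms")

    def get_db_password():
        # The ciphertext is produced with a KMS encrypt call outside the function;
        # the plaintext password never appears in the source code.
        ciphertext = base64.b64decode(os.environ["DB_PASSWORD_ENCRYPTED"])
        response = kms.decrypt(CiphertextBlob=ciphertext)
        return response["Plaintext"].decode("utf-8")

    def handler(event, context):
        password = get_db_password()
        # ... connect to the database using the decrypted password ...
        return {"statusCode": 200}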
Question #48 Topic 1

A company has an application that is using a MySQL-compatible Amazon Aurora Multi-AZ DB cluster as the database. A cross-Region read replica has been created for disaster recovery purposes. A DevOps engineer wants to automate the promotion of the replica so it becomes the primary database instance in the event of a failure. Which solution will accomplish this?

  • A Configure a latency-based Amazon Route 53 CNAME with health checks so it points to both the primary and replica endpoints. Subscribe an Amazon SNS topic to Amazon RDS failure notifications from AWS CloudTrail and use that topic to trigger an AWS Lambda function that will promote the replica instance as the master.
  • B Create an Aurora custom endpoint to point to the primary database instance. Configure the application to use this endpoint. Configure AWS CloudTrail to run an AWS Lambda function to promote the replica instance and modify the custom endpoint to point to the newly promoted instance.
  • C Create an AWS Lambda function to modify the application's AWS CloudFormation template to promote the replica, apply the template to update the stack, and point the application to the newly promoted instance. Create an Amazon CloudWatch alarm to trigger this Lambda function after the failure event occurs.
  • D Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge (Amazon CloudWatch Events) event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.
Suggested Answer: D
NOTE: Option D automates the promotion of the replica to the primary database instance when a failure occurs. Storing the Aurora endpoint in AWS Systems Manager Parameter Store keeps it in a single place that is easy to read and update. An Amazon EventBridge (Amazon CloudWatch Events) rule detects the database failure and invokes an AWS Lambda function that promotes the replica instance and updates the endpoint URL in Parameter Store. Coding the application to reload the endpoint from Parameter Store when a database connection fails keeps connectivity intact after the failover.
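A minimal sketch of such a promotion Lambda function, in Python with boto3, is shown below. The cluster identifier and parameter name are hypothetical, and the function assumes the EventBridge rule has already matched the failure event and that the function runs in (or its clients are configured for) the replica's Region.

    import boto3

    rds = boto3.client("rds")
    ssm = boto3.client("ssm")

    REPLICA_CLUSTER_ID = "app-aurora-replica"       # hypothetical identifier
    ENDPOINT_PARAMETER = "/app/db/writer-endpoint"  # hypothetical parameter name

    def handler(event, context):
        # Promote the cross-Region Aurora read replica to a standalone cluster.
        rds.promote_read_replica_db_cluster(DBClusterIdentifier=REPLICA_CLUSTER_ID)

        # Look up the promoted cluster's writer endpoint and publish it to
        # Parameter Store so the application can reload it on connection failure.
        cluster = rds.describe_db_clusters(DBClusterIdentifier=REPLICA_CLUSTER_ID)
        endpoint = cluster["DBClusters"][0]["Endpoint"]
        ssm.put_parameter(
            Name=ENDPOINT_PARAMETER,
            Value=endpoint,
            Type="String",
            Overwrite=True,
        )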
Question #49 Topic 1

A company is building a new pipeline by using AWS CodePipeline and AWS CodeBuild in a build account. The pipeline consists of two stages. The first stage is a CodeBuild job to build and package an AWS Lambda function. The second stage consists of deployment actions that operate on two different AWS accounts: a development environment account and a production environment account. The deployment stages use the AWS CloudFormation action that CodePipeline invokes to deploy the infrastructure that the Lambda function requires. A DevOps engineer creates the CodePipeline pipeline and configures the pipeline to encrypt build artifacts by using the AWS Key Management Service (AWS KMS) AWS managed key for Amazon S3 (the aws/s3 key). The artifacts are stored in an S3 bucket. When the pipeline runs, the CloudFormation actions fail with an access denied error. Which combination of actions must the DevOps engineer perform to resolve this error? (Choose two.)

  • A Create an S3 bucket in each AWS account for the artifacts. Allow the pipeline to write to the S3 buckets. Create a CodePipeline S3 action to copy the artifacts to the S3 bucket in each AWS account. Update the CloudFormation actions to reference the artifacts S3 bucket in the production account.
  • B Create a customer managed KMS key. Configure the KMS key policy to allow the IAM roles used by the CloudFormation action to perform decrypt operations. Modify the pipeline to use the customer managed KMS key to encrypt artifacts.
  • C Create an AWS managed KMS key. Configure the KMS key policy to allow the development account and the production account to perform decrypt operations. Modify the pipeline to use the KMS key to encrypt artifacts.
  • D In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, configure the CodePipeline CloudFormation action to use the roles.
  • E In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, modify the artifacts S3 bucket policy to allow the roles access. Configure the CodePipeline CloudFormation action to use the roles.
Suggested Answer: CE
NOTE: The DevOps engineer should create an AWS managed KMS key, configure the key policy so that the development account and the production account can perform decrypt operations, and modify the pipeline to use that key to encrypt artifacts. This ensures that both target accounts can decrypt the build artifacts with the correct key. In addition, the engineer should create an IAM role for CodePipeline in the development account and in the production account, give the roles permissions to perform CloudFormation operations and to retrieve and decrypt objects from the artifacts S3 bucket, modify the artifacts S3 bucket policy in the CodePipeline account to allow the roles access, and configure the CodePipeline CloudFormation action to use the roles. These roles provide the cross-account permissions that the CloudFormation actions need.
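Purely as an illustration of the cross-account permissions involved, the Python sketch below shows the kind of policy statements that would be attached to the artifact encryption key and to the artifacts S3 bucket. The account IDs, role names, and bucket name are placeholders, not values from the question.

    # Placeholder account IDs, role names, and bucket name - all assumed.
    DEPLOY_ROLES = [
        "arn:aws:iam::111111111111:role/CodePipelineCloudFormationRole",  # dev
        "arn:aws:iam::222222222222:role/CodePipelineCloudFormationRole",  # prod
    ]

    # Statement for the artifact encryption key policy: allow the cross-account
    # deployment roles to decrypt the pipeline artifacts.
    kms_key_policy_statement = {
        "Sid": "AllowCrossAccountDecrypt",
        "Effect": "Allow",
        "Principal": {"AWS": DEPLOY_ROLES},
        "Action": ["kms:Decrypt", "kms:DescribeKey"],
        "Resource": "*",
    }

    # Statement for the artifacts S3 bucket policy in the build account: allow
    # the same roles to fetch the build artifacts that CodePipeline stores there.
    s3_bucket_policy_statement = {
        "Sid": "AllowCrossAccountArtifactRead",
        "Effect": "Allow",
        "Principal": {"AWS": DEPLOY_ROLES},
        "Action": ["s3:GetObject", "s3:GetObjectVersion"],
        "Resource": "arn:aws:s3:::example-artifact-bucket/*",
    }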
Question #50 Topic 1

A company is testing a web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company uses a blue/green deployment process with immutable instances when deploying new software. During testing, users are being automatically logged out of the application at random times. Testers also report that, when a new version of the application is deployed, all users are logged out. The development team needs a solution to ensure users remain logged in across scaling events and application deployments. What is the MOST operationally efficient way to ensure users remain logged in?

  • A Enable smart sessions on the load balancer and modify the application to check for an existing session.
  • B Enable session sharing on the load balancer and modify the application to read from the session store.
  • C Store user session information in an Amazon S3 bucket and modify the application to read session information from the bucket.
  • D Modify the application to store user session information in an Amazon ElastiCache cluster.
Suggested Answer: B
NOTE: The most operationally efficient way to ensure users remain logged in across scaling events and application deployments is to enable session sharing on the load balancer and modify the application to read from the session store. This allows the session information to be shared across instances, ensuring that users remain logged in even when the application is scaled or a new version is deployed.
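For the "modify the application to read from the session store" part, the Python sketch below shows one common way to externalize session state to a shared store so that sessions survive instance replacement and new deployments. The use of Redis as the backing store, the host name, key prefix, and TTL are assumptions for the example, not part of the question.

    import json
    import uuid
    import redis

    # Hypothetical shared session store reachable by every instance in the
    # Auto Scaling group (for example, an in-memory cache cluster endpoint).
    store = redis.Redis(host="sessions.example.internal", port=6379)

    SESSION_TTL_SECONDS = 3600  # assumed idle timeout

    def create_session(user_data):
        # Persist the session outside the web tier so any instance can serve it.
        session_id = str(uuid.uuid4())
        store.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(user_data))
        return session_id

    def load_session(session_id):
        raw = store.get(f"session:{session_id}")
        return json.loads(raw) if raw else None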