AWS Certified DevOps Engineer Professional (DOP-C02)

The AWS Certified DevOps Engineer Professional (DOP-C02) exam questions were last updated today.
Disclaimers:
  • ExamTopics website is not related to, affiliated with, endorsed, or authorized by Amazon.
  • Trademarks, certification, and product names are used for reference only and belong to Amazon.

Topic 1 - Exam A

Question #31 Topic 1

A space exploration company receives telemetry data from multiple satellites. Small packets of data are received through Amazon API Gateway and are placed directly into an Amazon Simple Queue Service (Amazon SQS) standard queue. A custom application is subscribed to the queue and transforms the data into a standard format. Because of inconsistencies in the data that the satellites produce, the application is occasionally unable to transform the data. In these cases, the messages remain in the SQS queue. A DevOps engineer must develop a solution that retains the failed messages and makes them available to scientists for review and future processing. Which solution will meet these requirements?

  • A Configure AWS Lambda to poll the SQS queue and invoke a Lambda function to check whether the queue messages are valid. If validation fails, send a copy of the data that is not valid to an Amazon S3 bucket so that the scientists can review and correct the data. When the data is corrected, amend the message in the SQS queue by using a replay Lambda function with the corrected data.
  • B Convert the SQS standard queue to an SQS FIFO queue. Configure AWS Lambda to poll the SQS queue every 10 minutes by using an Amazon EventBridge schedule. Invoke the Lambda function to identify any messages with a SentTimestamp value that is older than 5 minutes, push the data to the same location as the application's output location, and remove the messages from the queue.
  • C Create an SQS dead-letter queue. Modify the existing queue by including a redrive policy that sets the Maximum Receives setting to 1 and sets the dead-letter queue ARN to the ARN of the newly created queue. Instruct the scientists to use the dead-letter queue to review the data that is not valid. Reprocess this data at a later time.
  • D Configure API Gateway to send messages to different SQS virtual queues that are named for each of the satellites. Update the application to use a new virtual queue for any data that it cannot transform, and send the message to the new virtual queue. Instruct the scientists to use the virtual queue to review the data that is not valid. Reprocess this data at a later time.
Suggested Answer: C
NOTE: Option C meets the requirements. Adding a redrive policy to the existing queue that sets Maximum Receives to 1 and targets the new dead-letter queue ARN causes any message the application receives but cannot process to be moved to the dead-letter queue after its first failed processing attempt. The failed messages are retained there, where the scientists can review them and reprocess them later.
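For reference, a redrive policy like the one in option C can be attached with the AWS SDK. The following is a minimal sketch using boto3; the queue names and the source queue URL are placeholders, not values from the question.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue that the scientists will review.
dlq_url = sqs.create_queue(QueueName="telemetry-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach a redrive policy to the existing source queue: after 1 failed
# receive (maxReceiveCount), SQS moves the message to the dead-letter queue.
source_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/telemetry-queue"  # placeholder
sqs.set_queue_attributes(
    QueueUrl=source_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "1"}
        )
    },
)
```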
Question #32 Topic 1

A company has multiple member accounts that are part of an organization in AWS Organizations. The security team needs to review every Amazon EC2 security group and their inbound and outbound rules. The security team wants to programmatically retrieve this information from the member accounts using an AWS Lambda function in the management account of the organization. Which combination of access changes will meet these requirements? (Choose three.)

  • A Create a trust relationship that allows users in the member accounts to assume the management account IAM role.
  • B Create a trust relationship that allows users in the management account to assume the IAM roles of the member accounts.
  • C Create an IAM role in each member account that has access to the AmazonEC2ReadOnlyAccess managed policy.
  • D Create an IAM role in each member account to allow the sts:AssumeRole action against the management account IAM role's ARN.
  • E Create an IAM role in the management account that allows the sts:AssumeRole action against the member account IAM role's ARN.
  • F Create an IAM role in the management account that has access to the AmazonEC2ReadOnlyAccess managed policy.
Suggested Answer: BCE
NOTE: The Lambda function runs in the management account and must assume a role in each member account to read that account's security groups. Option C creates the member-account role with the AmazonEC2ReadOnlyAccess managed policy so the role can call the EC2 describe APIs. Option B adds the trust relationship so that principals in the management account are allowed to assume those member-account roles. Option E grants the management-account role (the Lambda execution role) permission to call sts:AssumeRole on the member-account role ARNs, which completes the cross-account access path.
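A minimal sketch of the cross-account flow behind options B, C, and E, written with boto3. The role name and member account IDs are placeholders, not values from the question; the member role is assumed to trust the management account and carry AmazonEC2ReadOnlyAccess.

```python
import boto3

MEMBER_ROLE_NAME = "SecurityAuditReadOnlyRole"            # placeholder role name
MEMBER_ACCOUNT_IDS = ["111111111111", "222222222222"]     # placeholder account IDs

sts = boto3.client("sts")

def security_groups_for_account(account_id: str):
    # The Lambda's management-account role must be allowed to call sts:AssumeRole
    # on this ARN (option E); the member role must trust the management account
    # (option B) and have AmazonEC2ReadOnlyAccess attached (option C).
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{MEMBER_ROLE_NAME}",
        RoleSessionName="sg-audit",
    )["Credentials"]

    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    paginator = ec2.get_paginator("describe_security_groups")
    return [sg for page in paginator.paginate() for sg in page["SecurityGroups"]]

def lambda_handler(event, context):
    # Return a count of security groups per member account as a simple summary.
    return {acct: len(security_groups_for_account(acct)) for acct in MEMBER_ACCOUNT_IDS}
```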
Question #33 Topic 1

A company uses AWS Storage Gateway in file gateway mode in front of an Amazon S3 bucket that is used by multiple resources. In the morning when business begins, users do not see the objects processed by a third party the previous evening. When a DevOps engineer looks directly at the S3 bucket, the data is there, but it is missing in Storage Gateway. Which solution ensures that all the updated third-party files are available in the morning?

  • A Configure a nightly Amazon EventBridge event to invoke an AWS Lambda function to run the RefreshCache command for Storage Gateway.
  • B Instruct the third party to put data into the S3 bucket using AWS Transfer for SFTP.
  • C Modify Storage Gateway to run in volume gateway mode.
  • D Use S3 Same-Region Replication to replicate any changes made directly in the S3 bucket to Storage Gateway.
Suggested Answer: A
NOTE: The answer is A. A file gateway caches the S3 bucket's object listing locally, so objects that the third party writes directly to the bucket are not visible through the file share until the cache is refreshed. Scheduling a nightly Amazon EventBridge rule that invokes an AWS Lambda function to run the RefreshCache operation makes the previous evening's objects available to users in the morning.
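A minimal sketch of the Lambda function from option A, using boto3. The file share ARN is a placeholder; the nightly schedule would be an EventBridge rule that targets this function.

```python
import boto3

storagegateway = boto3.client("storagegateway")

FILE_SHARE_ARN = "arn:aws:storagegateway:us-east-1:123456789012:share/share-XXXXXXXX"  # placeholder

def lambda_handler(event, context):
    # RefreshCache makes objects written directly to the S3 bucket visible
    # to clients of the file gateway's file share.
    response = storagegateway.refresh_cache(FileShareARN=FILE_SHARE_ARN)
    return {"NotificationId": response["NotificationId"]}
```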
Question #34 Topic 1

A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic. A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis. Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)

  • A Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task. Update the application service definitions to include the logging task.
  • B Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs.
  • C Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket.
  • D Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket.
  • E Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket.
  • F Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.
Suggested Answer: BDF
NOTE: Option B sends the container application logs to Amazon CloudWatch Logs by setting the awslogs log driver in the ECS task definition. Option F delivers those logs to the S3 bucket in near real time through a CloudWatch Logs subscription filter that streams to a Kinesis Data Firehose delivery stream whose destination is the bucket. Option D captures the access logs by enabling ALB access logging, which writes directly to the S3 bucket. Access logging is an ALB feature, not a target-group feature, so option E is not possible, and the scheduled create-export-task approach in option C is batch-oriented rather than near real time.
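A minimal sketch of options B and F with boto3. The log group, image URI, Firehose delivery stream, IAM role ARN, and other names are placeholders; the Firehose delivery stream is assumed to already exist with the logging S3 bucket as its destination.

```python
import boto3

logs = boto3.client("logs")
ecs = boto3.client("ecs")

# Option B: route container stdout/stderr to CloudWatch Logs via the awslogs driver.
ecs.register_task_definition(
    family="app-task",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # placeholder
            "memory": 512,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/app",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "app",
                },
            },
        }
    ],
)

# Option F: stream the log group to Kinesis Data Firehose, which delivers
# the records to the logging S3 bucket in near real time.
logs.put_subscription_filter(
    logGroupName="/ecs/app",
    filterName="to-firehose",
    filterPattern="",  # empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/ecs-logs",  # placeholder
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",  # placeholder
)
```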
Question #35 Topic 1

A developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group, and also use Elastic Load Balancing for load balancing. Occasionally, some application servers are being terminated after failing ELB HTTP health checks. The developer would like to perform a root cause analysis on the issue, but before being able to access application logs, the server is terminated. How can log collection be automated?

  • A Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  • B Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  • C Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  • D Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
Suggested Answer: D
NOTE: Option D uses an Auto Scaling lifecycle hook to hold terminating instances in the Terminating:Wait state, which keeps the instance running long enough to collect its logs. An Amazon EventBridge rule matches the EC2 Instance-terminate Lifecycle Action event and invokes an AWS Lambda function, which runs an SSM Run Command script to copy the logs to Amazon S3 and then completes the lifecycle action so termination can proceed. Option A is incorrect because the Pending:Wait state applies to instance launch, not termination, and an EC2 Instance Terminate Successful event fires only after the instance is gone, which is too late to collect its logs.
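A minimal sketch of the Lambda function from option D, invoked by an EventBridge rule for the EC2 Instance-terminate Lifecycle Action event. The log path and S3 bucket are placeholders; in practice the function would wait for the Run Command to finish before completing the lifecycle action.

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

def lambda_handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Collect the application logs while the lifecycle hook holds the
    # instance in the Terminating:Wait state.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                "aws s3 cp /var/log/app/ s3://example-log-bucket/$(hostname)/ --recursive"  # placeholder bucket/path
            ]
        },
    )

    # Release the instance so Auto Scaling can finish terminating it.
    # (In practice, confirm the command succeeded before completing the action.)
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```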