AWS Certified DevOps Engineer Professional (DOP-C02)

The AWS Certified DevOps Engineer Professional (DOP-C02) questions below were last updated recently.
Disclaimers:
  • ExamTopics website is not related to, affiliated with, endorsed or authorized by Amazon.
  • Trademarks, certification & product names are used for reference only and belong to Amazon.

Topic 1 - Exam A

Question #6 Topic 1

A company has 20 service teams. Each service team is responsible for its own microservice. Each service team uses a separate AWS account for its microservice and a VPC with the 192.168.0.0/22 CIDR block. The company manages the AWS accounts with AWS Organizations. Each service team hosts its microservice on multiple Amazon EC2 instances behind an Application Load Balancer. The microservices communicate with each other across the public internet. The company’s security team has issued a new guideline that all communication between microservices must use HTTPS over private network connections and cannot traverse the public internet. A DevOps engineer must implement a solution that fulfills these obligations and minimizes the number of changes for each service team. Which solution will meet these requirements?

  • A Create a new AWS account in AWS Organizations. Create a VPC in this account, and use AWS Resource Access Manager to share the private subnets of this VPC with the organization. Instruct the service teams to launch a new Network Load Balancer (NLB) and EC2 instances that use the shared private subnets. Use the NLB DNS names for communication between microservices.
  • B Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use AWS PrivateLink to create VPC endpoints in each AWS account for the NLBs. Create subscriptions to each VPC endpoint in each of the other AWS accounts. Use the VPC endpoint DNS names for communication between microservices.
  • C Create a Network Load Balancer (NLB) in each of the microservice VPCs. Create VPC peering connections between each of the microservice VPCs. Update the route tables for each VPC to use the peering links. Use the NLB DNS names for communication between microservices.
  • D Create a new AWS account in AWS Organizations. Create a transit gateway in this account, and use AWS Resource Access Manager to share the transit gateway with the organization. In each of the microservice VPCs, create a transit gateway attachment to the shared transit gateway. Update the route tables of each VPC to use the transit gateway. Create a Network Load Balancer (NLB) in each of the microservice VPCs. Use the NLB DNS names for communication between microservices.
Suggested Answer: B
NOTE: The key detail is that every VPC uses the identical 192.168.0.0/22 CIDR block, so all of the address ranges overlap. VPC peering (option C) cannot be established between VPCs with overlapping CIDRs, and a transit gateway (option D) cannot route between them for the same reason. Option A would work but would force every team to migrate its EC2 instances into the shared subnets, which is a large change. AWS PrivateLink (option B) is designed for exactly this situation: each team fronts its microservice with a Network Load Balancer and exposes it as an endpoint service, and the other accounts create interface VPC endpoints in their own VPCs. Traffic stays on the AWS private network, CIDR overlap is irrelevant, and the only change for each team is adding the NLB and switching to the endpoint DNS names.
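The CIDR constraint is worth verifying directly: because every team's VPC uses the same 192.168.0.0/22 block, any pair of VPCs has fully overlapping address ranges, and both VPC peering and transit gateway routing require non-overlapping ranges. A minimal check with Python's standard ipaddress module:

```python
from ipaddress import ip_network

# Every service team's VPC uses the same block, per the question.
team_vpcs = [ip_network("192.168.0.0/22") for _ in range(20)]

# VPC peering and transit gateway routing both require the two sides to
# have non-overlapping CIDR ranges; identical ranges always overlap.
a, b = team_vpcs[0], team_vpcs[1]
print(a.overlaps(b))  # True -> no usable peering or TGW route between these VPCs
```

PrivateLink interface endpoints sidestep this because traffic is addressed to an endpoint network interface inside the consumer's own VPC, so the provider's CIDR never appears in the consumer's route tables.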
Question #7 Topic 1

A developer has written an application that writes data to Amazon DynamoDB. The DynamoDB table has been configured to use conditional writes. During peak usage times, writes are failing due to a ConditionalCheckFailedException error. How can the developer increase the application's reliability when multiple clients are attempting to write to the same record?

  • A Write the data to an Amazon SNS topic.
  • B Increase the amount of write capacity for the table to anticipate short-term spikes or bursts in write operations.
  • C Implement a caching solution, such as DynamoDB Accelerator or Amazon ElastiCache.
  • D Implement error retries and exponential backoff with jitter.
Suggested Answer: D
NOTE: Implementing error retries and exponential backoff with jitter is the standard way to handle write contention. When a conditional write fails with a ConditionalCheckFailedException, the application should re-read the item, re-evaluate the condition, and retry after a delay that grows exponentially with each attempt (optimistic locking). Adding jitter randomizes the delays so that multiple clients do not retry in lockstep, which would otherwise cause renewed contention and further failures.
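As an illustration (this is not AWS SDK code; boto3's built-in retry logic does not retry ConditionalCheckFailedException, so the application must handle it), a minimal full-jitter retry helper might look like the sketch below. The function and parameter names are hypothetical:

```python
import random
import time

class ConditionalCheckFailedException(Exception):
    """Stand-in for the DynamoDB error; with boto3 this surfaces as
    client.exceptions.ConditionalCheckFailedException."""

def backoff_delay(attempt: int, base: float = 0.05, cap: float = 2.0) -> float:
    # Full jitter: pick uniformly from [0, min(cap, base * 2**attempt)].
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

def retry_conditional_write(write_fn, max_attempts: int = 5) -> bool:
    """Retry write_fn (one conditional PutItem/UpdateItem) with exponential
    backoff and jitter. write_fn should re-read the item and rebuild its
    condition expression on every call (optimistic locking)."""
    for attempt in range(max_attempts):
        try:
            write_fn()
            return True
        except ConditionalCheckFailedException:
            time.sleep(backoff_delay(attempt))
    return False
```

The cap keeps worst-case delays bounded, and full jitter (uniform over the whole window rather than a fixed exponential step) is what spreads competing clients apart.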
Question #8 Topic 1

A company uses Amazon S3 to store proprietary information. The development team creates buckets for new projects on a daily basis. The security team wants to ensure that all existing and future buckets have encryption, logging, and versioning enabled. Additionally, no buckets should ever be publicly read or write accessible. What should a DevOps engineer do to meet these requirements?

  • A Enable AWS CloudTrail and configure automatic remediation using AWS Lambda.
  • B Enable AWS Config rules and configure automatic remediation using AWS Systems Manager documents.
  • C Enable AWS Trusted Advisor and configure automatic remediation using Amazon EventBridge.
  • D Enable AWS Systems Manager and configure automatic remediation using Systems Manager documents.
Suggested Answer: B
NOTE: The DevOps engineer should enable AWS Config rules and configure automatic remediation using AWS Systems Manager documents. AWS Config continuously evaluates both existing and newly created buckets, which directly addresses the "all existing and future buckets" requirement. Managed rules can check S3 default encryption, server access logging, versioning, and public read/write access, and each rule can trigger an AWS Systems Manager Automation document that automatically remediates any non-compliant bucket. CloudTrail (option A) only records API calls, and Trusted Advisor (option C) offers only a limited set of S3 checks.
Question #9 Topic 1

A company has containerized all of its in-house quality control applications. The company is running Jenkins on Amazon EC2 instances, which require patching and upgrading. The compliance officer has requested a DevOps engineer begin encrypting build artifacts since they contain company intellectual property. What should the DevOps engineer do to accomplish this in the MOST maintainable manner?

  • A Automate patching and upgrading using AWS Systems Manager on EC2 instances and encrypt Amazon EBS volumes by default.
  • B Deploy Jenkins to an Amazon ECS cluster and copy build artifacts to an Amazon S3 bucket with default encryption enabled.
  • C Leverage AWS CodePipeline with a build action and encrypt the artifacts using AWS Secrets Manager.
  • D Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on EC2 instances.
Suggested Answer: D
NOTE: Replacing the self-managed Jenkins instances with AWS CodeBuild (option D) is the most maintainable choice: CodeBuild is a fully managed build service, so there are no instances left to patch or upgrade, and it encrypts build artifacts by default with an AWS KMS key. Option A encrypts the EBS volumes but leaves the entire patching and upgrading burden in place, and EBS encryption does not protect artifacts once they leave the instance.
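For reference, option D's setup maps onto the CodeBuild create_project API roughly as follows. The field names match the API; the ARNs, bucket name, and repository URL are placeholders, and in practice the dict would be passed to boto3's codebuild client:

```python
def codebuild_project_config(name: str, kms_key_arn: str, artifact_bucket: str) -> dict:
    """Build the argument dict for codebuild.create_project. CodeBuild
    encrypts output artifacts with the project's encryptionKey (falling
    back to the AWS-managed aws/s3 key if none is specified)."""
    return {
        "name": name,
        # Placeholder source repository for the containerized QC applications.
        "source": {"type": "GITHUB", "location": "https://github.com/example/quality-control"},
        "artifacts": {
            "type": "S3",
            "location": artifact_bucket,
            "encryptionDisabled": False,  # keep artifact encryption on
        },
        "environment": {
            "type": "LINUX_CONTAINER",
            "image": "aws/codebuild/standard:7.0",
            "computeType": "BUILD_GENERAL1_SMALL",
        },
        # Placeholder service role ARN.
        "serviceRole": "arn:aws:iam::123456789012:role/codebuild-service-role",
        "encryptionKey": kms_key_arn,
    }
```

Because the build fleet itself is managed by AWS, the patching/upgrading concern from the question disappears entirely rather than being automated.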
Question #10 Topic 1

An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance. When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region. How should the company meet these requirements with the LEAST amount of application changes?

  • A Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.
  • B Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.
  • C Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases.
  • D Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.
Suggested Answer: C
NOTE: The application already runs on Aurora, so staying on Aurora requires the least change (option C). Cross-Region read replicas of the catalog database give every region a local, read-only copy of the single product catalog, while a separate local Aurora instance in each region keeps customer information and purchases inside that region for compliance. Options A, B, and D would all require rewriting the application's data access layer for DynamoDB or Amazon Redshift, which contradicts the "least amount of application changes" requirement.
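Option C's read/write routing can be sketched as a small helper: catalog writes go to the single primary, catalog reads go to the in-region replica, and customer data always stays in-region. All endpoint names below are hypothetical placeholders:

```python
# Hypothetical endpoint map: one Aurora primary for the catalog, plus a
# read replica and a local customer-data cluster in each region.
ENDPOINTS = {
    "catalog_primary": "catalog.cluster-xyz.us-east-1.rds.amazonaws.com",
    "catalog_replica": {
        "us-east-1": "catalog.cluster-ro-xyz.us-east-1.rds.amazonaws.com",
        "eu-west-1": "catalog-replica.cluster-ro-abc.eu-west-1.rds.amazonaws.com",
    },
    "customers": {
        "us-east-1": "customers.cluster-def.us-east-1.rds.amazonaws.com",
        "eu-west-1": "customers.cluster-ghi.eu-west-1.rds.amazonaws.com",
    },
}

def pick_endpoint(dataset: str, region: str, write: bool = False) -> str:
    """Route catalog writes to the single primary, catalog reads to the
    local replica, and all customer/purchase data to the in-region cluster."""
    if dataset == "catalog":
        return ENDPOINTS["catalog_primary"] if write else ENDPOINTS["catalog_replica"][region]
    return ENDPOINTS["customers"][region]
```

Because both data stores remain Aurora (MySQL/PostgreSQL-compatible), the application's SQL and drivers are unchanged; only connection strings differ per region.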