AWS Certified Developer Associate (DVA-C02)

The AWS Certified Developer Associate (DVA-C02) questions were last updated today.
  • Viewing page 9 out of 215 pages.
  • Viewing questions 41-45 out of 1,075 questions.
Disclaimers:
  • ExamTopics is not related to, affiliated with, endorsed, or authorized by Amazon.
  • Trademarks, certification, and product names are used for reference only and belong to Amazon.

Topic 1 - Exam A

Question #41 Topic 1

A developer is deploying a new application to Amazon Elastic Container Service (Amazon ECS). The developer needs to securely store and retrieve different types of variables. These variables include authentication information for a remote API, the URL for the API, and credentials. The authentication information and API URL must be available to all current and future deployed versions of the application across development, testing, and production environments. How should the developer retrieve the variables with the FEWEST application changes?

  • A Update the application to retrieve the variables from AWS Systems Manager Parameter Store. Use unique paths in Parameter Store for each variable in each environment. Store the credentials in AWS Secrets Manager in each environment.
  • B Update the application to retrieve the variables from AWS Key Management Service (AWS KMS). Store the API URL and credentials as unique keys for each environment.
  • C Update the application to retrieve the variables from an encrypted file that is stored with the application. Store the API URL and credentials in unique files for each environment.
  • D Update the application to retrieve the variables from each of the deployed environments. Define the authentication information and API URL in the ECS task definition as unique names during the deployment process.
Suggested Answer: A
Explanation: AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data and secrets. Unique paths per environment let the shared authentication information and API URL be read by every current and future deployment with a single lookup convention, so the application needs only minimal changes. AWS Secrets Manager is purpose-built for the credentials: it protects access to applications, services, and IT resources and eliminates the upfront and ongoing expense of operating your own secrets-management infrastructure.
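As a hedged sketch of this pattern (the `/myapp` prefix and parameter names are illustrative assumptions, not part of the question), the application can derive one hierarchical path per environment and fetch everything under it with a single paginated `get_parameters_by_path` call:

```python
# Sketch only: "/myapp" and the parameter names are assumed, not from the exam item.
def parameter_path(environment: str, name: str) -> str:
    """Build a hierarchical Parameter Store path such as /myapp/prod/api-url."""
    return f"/myapp/{environment}/{name}"

def load_config(ssm_client, environment: str) -> dict:
    """Fetch every parameter under one environment's path in one paginated call.

    ssm_client is a boto3 SSM client; SecureString values are decrypted in flight.
    """
    paginator = ssm_client.get_paginator("get_parameters_by_path")
    config = {}
    for page in paginator.paginate(
        Path=f"/myapp/{environment}/", Recursive=True, WithDecryption=True
    ):
        for param in page["Parameters"]:
            # Keep only the leaf name, e.g. "/myapp/prod/api-url" -> "api-url".
            config[param["Name"].rsplit("/", 1)[-1]] = param["Value"]
    return config
```

Because the environment is the only variable in the path, promoting the application from development to production requires no code change, only a different path prefix.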
Question #42 Topic 1

A company stores its data in data tables in a series of Amazon S3 buckets. The company received an alert that customer credit card information might have been exposed in a data table on one of the company's public applications. A developer needs to identify all potential exposures within the application environment. Which solution will meet these requirements?

  • A Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
  • B Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.
  • C Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
  • D Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.
Suggested Answer: B
Explanation: Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in Amazon S3, including personally identifiable information (PII) and data that needs special handling to meet compliance requirements. Because credit card numbers are financial data, run a Macie classification job on the affected buckets and filter the findings by the SensitiveData:S3Object/Financial finding type.
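A hedged sketch of the filter the developer would pass when listing Macie findings (the bucket names are placeholders; the field names follow the Macie 2 finding-filter format, so verify them against the current API before relying on this):

```python
# Assumed field names from the Macie 2 finding-filter format; bucket names are
# placeholders, not values from the question.
FINANCIAL_FINDING_TYPE = "SensitiveData:S3Object/Financial"

def financial_finding_criteria(bucket_names):
    """Build a findingCriteria dict restricting Macie findings to financial
    data detected in the affected S3 buckets."""
    return {
        "criterion": {
            "type": {"eq": [FINANCIAL_FINDING_TYPE]},
            "resourcesAffected.s3Bucket.name": {"eq": list(bucket_names)},
        }
    }
```

The resulting dict would be passed as the `findingCriteria` argument to the boto3 `macie2` client's `list_findings` call.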
Question #43 Topic 1

An ecommerce website uses an AWS Lambda function and an Amazon RDS for MySQL database for an order fulfillment service. The service needs to return order confirmation immediately. During a marketing campaign that caused an increase in the number of orders, the website's operations team noticed errors for “too many connections” from Amazon RDS. However, the RDS DB cluster metrics are healthy. CPU and memory capacity are still available. What should a developer do to resolve the errors?

  • A Initialize the database connection outside the handler function. Increase the max_user_connections value on the parameter group of the DB cluster. Restart the DB cluster.
  • B Initialize the database connection outside the handler function. Use RDS Proxy instead of connecting directly to the DB cluster.
  • C Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the orders. Ingest the orders into the database. Set the Lambda function's concurrency to a value that equals the number of available database connections.
  • D Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the orders. Ingest the orders into the database. Set the Lambda function's concurrency to a value that is less than the number of available database connections.
Suggested Answer: B
Explanation: The errors occur because each concurrent Lambda invocation can open its own database connection, exhausting MySQL's connection limit even while the DB cluster's CPU and memory remain healthy. Initializing the database connection outside the handler lets warm execution environments reuse it across invocations, and Amazon RDS Proxy pools and shares the connections established with the database, so spikes in Lambda concurrency no longer translate into "too many connections" errors.
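The connection-reuse half of the answer can be sketched as below. This is a minimal illustration: `sqlite3` stands in for a MySQL client so the snippet is self-contained; in the real function you would connect to the RDS Proxy endpoint instead, and the table and event fields are assumptions.

```python
# Pattern sketch: the connection is created once per execution environment
# (at cold start, outside the handler) and reused on every invocation.
# sqlite3 is a stand-in for a MySQL client pointed at the RDS Proxy endpoint.
import sqlite3

connection = sqlite3.connect(":memory:")  # created once, not per invocation
connection.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY)")

def handler(event, context=None):
    """Each invocation reuses the module-level connection."""
    with connection:  # commits on success, rolls back on error
        connection.execute("INSERT INTO orders (id) VALUES (?)", (event["order_id"],))
    count = connection.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    return {"confirmed": event["order_id"], "total_orders": count}
```

Moving the `connect` call inside `handler` would recreate the anti-pattern the question describes: one fresh connection per invocation, multiplied by Lambda concurrency.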
Question #44 Topic 1

A developer is working on an ecommerce website. The developer wants to review server logs without logging in to each of the application servers individually. The website runs on multiple Amazon EC2 instances, is written in Python, and needs to be highly available. How can the developer update the application to meet these requirements with MINIMUM changes?

  • A Rewrite the application to be cloud native and to run on AWS Lambda, where the logs can be reviewed in Amazon CloudWatch.
  • B Set up centralized logging by using Amazon OpenSearch Service, Logstash, and OpenSearch Dashboards.
  • C Scale down the application to one larger EC2 instance where only one instance is recording logs.
  • D Install the unified Amazon CloudWatch agent on the EC2 instances. Configure the agent to push the application logs to CloudWatch.
Suggested Answer: D
Explanation: Installing the unified Amazon CloudWatch agent on the EC2 instances and configuring it to push the application logs to CloudWatch Logs gives the developer centralized log review in the CloudWatch console, without modifying the application code or its highly available multi-instance architecture.
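A minimal agent configuration sketch for the `logs` section (the file path and log group name are assumptions; the real values depend on where the Python application writes its logs):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/app.log",
            "log_group_name": "ecommerce-app",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

The `{instance_id}` placeholder gives each EC2 instance its own log stream, so all servers' logs land in one log group while remaining distinguishable.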
Question #45 Topic 1

A company needs to harden its container images before the images are in a running state. The company's application uses Amazon Elastic Container Registry (Amazon ECR) as an image registry, Amazon Elastic Kubernetes Service (Amazon EKS) for compute, and an AWS CodePipeline pipeline that orchestrates a continuous integration and continuous delivery (CI/CD) workflow. Dynamic application security testing occurs in the final stage of the pipeline, after a new image is deployed to a development namespace in the EKS cluster. A developer needs to place an analysis stage before this deployment to analyze the container image earlier in the CI/CD pipeline. Which solution will meet these requirements with the MOST operational efficiency?

  • A Build the container image and run the docker scan command locally. Mitigate any findings before pushing changes to the source code repository. Write a pre-commit hook that enforces the use of this workflow before commit.
  • B Create a new CodePipeline stage that occurs after the container image is built. Configure ECR basic image scanning to scan on image push. Use an AWS Lambda function as the action provider. Configure the Lambda function to check the scan results and to fail the pipeline if there are findings.
  • C Create a new CodePipeline stage that occurs after source code has been retrieved from its repository. Run a security scanner on the latest revision of the source code. Fail the pipeline if there are findings.
  • D Add an action to the deployment stage of the pipeline so that the action occurs before the deployment to the EKS cluster. Configure ECR basic image scanning to scan on image push. Use an AWS Lambda function as the action provider. Configure the Lambda function to check the scan results and to fail the pipeline if there are findings.
Suggested Answer: B
Explanation: Adding a dedicated CodePipeline stage immediately after the image is built, with ECR basic scanning configured to scan on image push, analyzes the image as early as possible and before any deployment. An AWS Lambda function acting as the stage's action provider can check the scan results and fail the pipeline when findings exist, automating the gate with minimal operational overhead. Option D performs the same check but delays it until the deployment stage, and option A relies on developers running scans locally, which is neither enforced in the pipeline nor operationally efficient.
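The Lambda gate's core decision can be sketched as below. The severity threshold is an assumption, and the input mirrors the shape of the ECR `describe_image_scan_findings` response; the CodePipeline wiring (reporting success or failure back to the pipeline) is omitted.

```python
# Sketch: pass/fail logic for the Lambda action provider. The blocking
# severity set is an assumed policy choice, not from the question.
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def should_fail_pipeline(scan_findings: dict) -> bool:
    """Return True if ECR basic scanning reported any blocking-severity finding.

    scan_findings mirrors the describe_image_scan_findings response shape:
    {"imageScanFindings": {"findings": [{"severity": "HIGH", ...}, ...]}}
    """
    findings = scan_findings.get("imageScanFindings", {}).get("findings", [])
    return any(f.get("severity") in BLOCKING_SEVERITIES for f in findings)
```

In the full solution, the Lambda function would fetch the scan results for the just-pushed image tag and call the CodePipeline job-result API to pass or fail the stage based on this check.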