AWS Certified Solutions Architect - Professional (SAP-C02)

The AWS Certified Solutions Architect - Professional (SAP-C02) questions were last updated recently.
  • Viewing page 8 of 270 pages.
  • Viewing questions 36-40 of 1,350 questions.
Disclaimers:
  • The ExamTopics website is not related to, affiliated with, endorsed by, or authorized by Amazon.
  • Trademarks, certification, and product names are used for reference only and belong to their respective owners.

Topic 1 - Exam A

Question #36 Topic 1

A company is building a serverless application that runs on an AWS Lambda function that is attached to a VPC. The company needs to integrate the application with a new service from an external provider. The external provider supports only requests that come from public IPv4 addresses that are in an allow list. The company must provide a single public IP address to the external provider before the application can start using the new service. Which solution will give the application the ability to access the new service?

  • A Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway.
  • B Deploy an egress-only internet gateway. Associate an Elastic IP address with the egress-only internet gateway. Configure the elastic network interface on the Lambda function to use the egress-only internet gateway.
  • C Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the Lambda function to use the internet gateway.
  • D Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the default route in the public VPC route table to use the internet gateway.
Suggested Answer: A
Explanation: Lambda functions attached to a VPC have no direct internet access. Routing their outbound traffic through a NAT gateway that has an Elastic IP address gives the application internet access through a single, stable public IPv4 address, which is exactly what the external provider's allow list requires.
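As a rough sketch (not part of the question), this setup could be provisioned with boto3. The resource IDs are placeholders, and the calls require AWS credentials and real VPC resources, so this is configuration code to read rather than run as-is:

```python
import boto3  # requires AWS credentials and a real VPC when actually run


def provision_nat_egress(public_subnet_id, private_route_table_id):
    """Allocate an Elastic IP, create a NAT gateway in a public subnet,
    and point the private route table's default route at it.
    The returned public IP is the single address to give the provider."""
    ec2 = boto3.client("ec2")
    eip = ec2.allocate_address(Domain="vpc")
    natgw = ec2.create_nat_gateway(
        SubnetId=public_subnet_id, AllocationId=eip["AllocationId"]
    )
    nat_id = natgw["NatGateway"]["NatGatewayId"]
    # Wait until the NAT gateway is usable before adding the route.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    ec2.create_route(
        RouteTableId=private_route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
    return eip["PublicIp"]
```

The Lambda function's elastic network interfaces live in the private subnets, so their outbound traffic follows the 0.0.0.0/0 route through the NAT gateway and exits with the Elastic IP.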
Question #37 Topic 1

A company hosts a Git repository in an on-premises data center. The company uses webhooks to invoke functionality that runs in the AWS Cloud. The company hosts the webhook logic on a set of Amazon EC2 instances in an Auto Scaling group that the company set as a target for an Application Load Balancer (ALB). The Git server calls the ALB for the configured webhooks. The company wants to move the solution to a serverless architecture. Which solution will meet these requirements with the LEAST operational overhead?

  • A For each webhook, create and configure an AWS Lambda function URL. Update the Git servers to call the individual Lambda function URLs.
  • B Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint.
  • C Deploy the webhook logic to AWS App Runner. Create an ALB, and set App Runner as the target. Update the Git servers to call the ALB endpoint.
  • D Containerize the webhook logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster, and run the webhook logic in AWS Fargate. Create an Amazon API Gateway REST API, and set Fargate as the target. Update the Git servers to call the API Gateway endpoint.
Suggested Answer: B
Explanation: The requirements are a serverless architecture with the LEAST operational overhead. Option B is the ideal choice: each webhook's logic runs in its own AWS Lambda function, and all of them are exposed through a single Amazon API Gateway HTTP API endpoint. An HTTP API is a low-cost, fully managed way to front Lambda functions, so this design incurs the least operational overhead.
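A minimal sketch of one such Lambda handler behind an HTTP API (payload format 2.0). The route paths and response fields here are illustrative, not from the question:

```python
import json


def handler(event, context):
    """Route incoming webhooks by the HTTP API request path.
    Expects an API Gateway HTTP API v2.0 event ('rawPath', 'body')."""
    path = event.get("rawPath", "")
    body = json.loads(event.get("body") or "{}")

    if path == "/push":
        result = {"action": "ci-build", "ref": body.get("ref")}
    elif path == "/merge":
        result = {"action": "deploy", "ref": body.get("ref")}
    else:
        return {"statusCode": 404,
                "body": json.dumps({"error": "unknown webhook"})}

    return {"statusCode": 200, "body": json.dumps(result)}
```

The Git server would then be reconfigured to POST to `https://<api-id>.execute-api.<region>.amazonaws.com/push` and similar paths, replacing the ALB endpoint.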
Question #38 Topic 1

A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only analysis. After 1 month, the company deletes the index that contains the data from the cluster. For compliance purposes, the company must retain a copy of all input data. The company is concerned about ongoing costs and asks a solutions architect to recommend a new solution. Which solution will meet these requirements MOST cost-effectively?

  • A Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
  • B Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
  • C Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster. Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.
  • D Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
Suggested Answer: B
Explanation: This option balances reducing ongoing costs with maintaining compliance. Reducing the cluster to 2 data nodes cuts the cost of running the cluster, and UltraWarm nodes provide lower-cost, read-only storage that suits data that is analyzed but never updated. Transitioning the indexes to UltraWarm at ingest keeps hot-node usage to a minimum. Finally, moving the input data to S3 Glacier Deep Archive after 1 month with an S3 Lifecycle policy retains the required copy at the lowest storage cost; Deep Archive still allows retrieval (standard retrievals complete within 12 hours) if the data is ever needed again. Option C fails the compliance requirement because it deletes the input data.
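The S3 half of this answer is a single lifecycle rule. A minimal sketch of the rule document as a Python dict (the rule ID and prefix are made up; the result could be applied with boto3's `put_bucket_lifecycle_configuration`):

```python
def deep_archive_after(days=30, prefix=""):
    """Build an S3 Lifecycle configuration that transitions objects
    to S3 Glacier Deep Archive after `days` days (30 ~= the 1-month
    retention window in the question)."""
    return {
        "Rules": [
            {
                "ID": "archive-input-data",  # illustrative rule name
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": days, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    }
```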
Question #39 Topic 1

A company has multiple business units that each have separate accounts on AWS. Each business unit manages its own network with several VPCs that have CIDR ranges that overlap. The company’s marketing team has created a new internal application and wants to make the application accessible to all the other business units. The solution must use private IP addresses only. Which solution will meet these requirements with the LEAST operational overhead?

  • A Instruct each business unit to add a unique secondary CIDR range to the business unit's VPC. Peer the VPCs and use a private NAT gateway in the secondary range to route traffic to the marketing team.
  • B Create an Amazon EC2 instance to serve as a virtual appliance in the marketing account's VPC. Create an AWS Site-to-Site VPN connection between the marketing team and each business unit's VPC. Perform NAT where necessary.
  • C Create an AWS PrivateLink endpoint service to share the marketing application. Grant permission to specific AWS accounts to connect to the service. Create interface VPC endpoints in other accounts to access the application by using private IP addresses.
  • D Create a Network Load Balancer (NLB) in front of the marketing application in a private subnet. Create an API Gateway API. Use the Amazon API Gateway private integration to connect the API to the NLB. Activate IAM authorization for the API. Grant access to the accounts of the other business units.
Suggested Answer: C
Explanation: Option C is the best solution. AWS PrivateLink lets consumers reach a service in another account over private IP addresses even when VPC CIDR ranges overlap, because traffic flows through interface endpoints rather than through VPC routing. Sharing the marketing application as an endpoint service, granting the other accounts permission, and having each account create interface VPC endpoints avoids VPC peering, NAT appliances, and VPN management entirely, which makes it the option with the least operational overhead. Nothing is exposed to the public internet.
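A rough provisioning sketch of both sides with boto3 (all ARNs and IDs are placeholders; a PrivateLink endpoint service is fronted by a Network Load Balancer, and the calls require AWS credentials, so this is configuration code to read rather than run):

```python
import boto3  # requires AWS credentials and real resources when actually run


def share_app_via_privatelink(nlb_arn, consumer_account_ids):
    """Provider side (marketing account): expose the app behind an NLB
    as a VPC endpoint service and allow the listed accounts."""
    ec2 = boto3.client("ec2")
    svc = ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=[nlb_arn],
        AcceptanceRequired=False,
    )["ServiceConfiguration"]
    ec2.modify_vpc_endpoint_service_permissions(
        ServiceId=svc["ServiceId"],
        AddAllowedPrincipals=[
            f"arn:aws:iam::{acct}:root" for acct in consumer_account_ids
        ],
    )
    return svc["ServiceName"]  # consumers connect to this service name


def connect_as_consumer(service_name, vpc_id, subnet_ids):
    """Consumer side (each business unit): create an interface VPC
    endpoint to the shared service, reachable on private IPs."""
    ec2 = boto3.client("ec2")
    return ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=vpc_id,
        ServiceName=service_name,
        SubnetIds=subnet_ids,
    )
```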
Question #40 Topic 1

A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume. The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos. Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?

  • A Reconfigure Amazon EFS to enable maximum I/O.
  • B Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
  • C Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
  • D Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
Suggested Answer: C
Explanation: Configuring an Amazon CloudFront distribution backed by an S3 bucket is the most cost-effective and scalable solution. CloudFront caches and delivers the videos from edge locations, which offloads the heavy video traffic from the EC2 fleet and resolves the buffering and timeout issues, and it integrates directly with Amazon S3 as an origin. Migrating the videos from comparatively expensive EFS storage to cheaper S3 storage also reduces ongoing cost.
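A minimal sketch of the `DistributionConfig` such a setup might use, as a plain dict of the kind boto3's `cloudfront.create_distribution` accepts. The bucket domain, origin ID, and origin access control ID are placeholders:

```python
def s3_video_distribution_config(bucket_domain, oac_id):
    """Build a minimal CloudFront DistributionConfig with an S3 origin.
    `bucket_domain` and `oac_id` (an origin access control, so the
    bucket can stay private) are assumed inputs, not from the question."""
    return {
        "CallerReference": "blog-videos-1",  # any unique string
        "Comment": "Serve blog videos from S3 via CloudFront",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-videos",
                    "DomainName": bucket_domain,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                    "OriginAccessControlId": oac_id,
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-videos",
            "ViewerProtocolPolicy": "redirect-to-https",
            # ID of AWS's managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
```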