AWS Certified Solutions Architect - Associate (SAA-C03)

Disclaimers:
  • ExamTopics website is not related to, affiliated with, endorsed or authorized by Amazon.
  • Trademarks, certification & product names are used for reference only and belong to Amazon.

Topic 1 - Exam A

Question #1 Topic 1

A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks. Which solution meets these requirements?

  • A Enable Amazon GuardDuty on the account.
  • B Enable Amazon Inspector on the EC2 instances.
  • C Enable AWS Shield and assign Amazon Route 53 to it.
  • D Enable AWS Shield Advanced and assign the ELB to it.
Suggested Answer: D
Explanation: AWS Shield Advanced provides expanded detection and mitigation for large and sophisticated DDoS attacks, along with cost protection against usage spikes caused by an attack. It can protect applications running on Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. Because the company uses a third-party DNS service, there is no Route 53 resource to protect (option C); the ELB is the resource to associate with Shield Advanced.
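A minimal sketch of this configuration in Python with boto3, assuming the account has not yet subscribed to Shield Advanced; the protection name and load balancer ARN below are placeholders for illustration, not values from the question:

    import boto3

    # Shield APIs are served from the us-east-1 endpoint.
    shield = boto3.client("shield", region_name="us-east-1")

    # One-time opt-in to Shield Advanced for the account (carries a monthly fee).
    shield.create_subscription()

    # Register the load balancer as a Shield Advanced protected resource.
    shield.create_protection(
        Name="public-web-elb-protection",
        ResourceArn=(
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "loadbalancer/app/public-web/0123456789abcdef"
        ),
    )

Once the protection exists, Shield Advanced monitors the ELB for DDoS activity and provides attack diagnostics and cost protection during an event.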
Question #2 Topic 1

A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days. What is the MOST operationally efficient solution that meets these requirements?

  • A Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • B Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • C Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
  • D Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Suggested Answer: A
Explanation: Amazon Kinesis Data Firehose is fully managed, scales automatically to match the incoming data throughput, and requires no infrastructure to operate. It can capture, transform, and deliver streaming data directly to an Amazon S3 bucket. An S3 Lifecycle rule then transitions objects to S3 Glacier after 14 days, keeping two weeks of data available for immediate analysis while archiving older data at low cost.
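A minimal sketch of the ingestion and archival setup in Python with boto3; the stream name, IAM role ARN, and bucket name are placeholders for illustration:

    import boto3

    firehose = boto3.client("firehose")
    s3 = boto3.client("s3")

    # Serverless ingestion: producers write alert records directly to the
    # delivery stream, which buffers and delivers them to S3.
    firehose.create_delivery_stream(
        DeliveryStreamName="edge-device-alerts",
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::edge-device-alerts",
            "BufferingHints": {"SizeInMBs": 128, "IntervalInSeconds": 300},
        },
    )

    # Archive objects older than 14 days to S3 Glacier.
    s3.put_bucket_lifecycle_configuration(
        Bucket="edge-device-alerts",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-14-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )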
Question #3 Topic 1

A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges. What is the MOST cost-effective way for the company to avoid Regional data transfer charges?

  • A Launch the NAT gateway in each Availability Zone.
  • B Replace the NAT gateway with a NAT instance.
  • C Deploy a gateway VPC endpoint for Amazon S3.
  • D Provision an EC2 Dedicated Host to run the EC2 instances.
Suggested Answer: C
Explanation: A gateway VPC endpoint for Amazon S3 lets the EC2 instances reach S3 over the AWS network without routing through the NAT gateway, so the S3 traffic no longer incurs NAT gateway data processing or Regional data transfer charges. Gateway endpoints for S3 have no additional cost.
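A minimal sketch of the endpoint in Python with boto3; the Region, VPC ID, and route table IDs are placeholders for illustration:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint for S3. Associating the private subnets' route tables
    # adds a prefix-list route to S3, so S3 traffic bypasses the NAT gateway.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0aaa1111bbbb22223", "rtb-0ccc3333dddd44445"],
    )

After the route tables are associated, image downloads and uploads to S3 flow through the endpoint rather than the NAT gateway, which is where the savings come from.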
Question #4 Topic 1

A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for both timely backups to Amazon S3 and with minimal impact on internet connectivity for internal users. Which solution meets these requirements?

  • A Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
  • B Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
  • C Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
  • D Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Suggested Answer: B
Explanation: AWS Direct Connect provides a dedicated network connection from the on-premises network to AWS as an alternative to the public internet. Sending backup traffic over Direct Connect takes it off the internet link, which increases available throughput and gives a more consistent network experience, so backups reach Amazon S3 on time without competing with internal users for internet bandwidth.
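The connection itself can be ordered through the API. A minimal sketch in Python with boto3, assuming a 1 Gbps dedicated connection; the location code and connection name are placeholders for illustration:

    import boto3

    dx = boto3.client("directconnect")

    # Request a dedicated connection at a Direct Connect location.
    # A virtual interface is configured on this connection afterwards to carry
    # the backup traffic to Amazon S3, keeping it off the office internet link.
    dx.create_connection(
        location="EqDC2",           # placeholder Direct Connect location code
        bandwidth="1Gbps",
        connectionName="onprem-backup-to-s3",
    )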
Question #5 Topic 1

A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly available. Which combination of configuration options will meet these requirements? (Choose two.)

  • A Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
  • B Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
  • C Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
  • D Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
  • E Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.
Suggested Answer: AE
Explanation: An Application Load Balancer requires at least two subnets in different Availability Zones, so a single public subnet (option D) cannot provide high availability. With two public subnets hosting the ALB and one NAT gateway per Availability Zone (option E), and the EC2 instances and the RDS Multi-AZ DB instance placed in private subnets (option A), neither tier is exposed to the public internet while the NAT gateways provide the outbound access needed for payment processing.
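A minimal sketch of this layout (options A and E) in Python with boto3; every ID, name, and credential below is a placeholder for illustration:

    import boto3

    ec2 = boto3.client("ec2")
    elbv2 = boto3.client("elbv2")
    rds = boto3.client("rds")

    PUBLIC_SUBNETS = ["subnet-0aaa1111111111111", "subnet-0bbb2222222222222"]    # one per AZ
    PRIVATE_SUBNETS = ["subnet-0ccc3333333333333", "subnet-0ddd4444444444444"]   # one per AZ

    # One NAT gateway per Availability Zone so outbound payment traffic
    # survives the loss of a single AZ.
    for subnet_id in PUBLIC_SUBNETS:
        eip = ec2.allocate_address(Domain="vpc")
        ec2.create_nat_gateway(SubnetId=subnet_id, AllocationId=eip["AllocationId"])

    # Internet-facing Application Load Balancer spanning both public subnets.
    elbv2.create_load_balancer(
        Name="ecommerce-web-alb",
        Subnets=PUBLIC_SUBNETS,
        Scheme="internet-facing",
        Type="application",
    )

    # Database tier: Multi-AZ RDS instance in a subnet group built from the private subnets.
    rds.create_db_subnet_group(
        DBSubnetGroupName="ecommerce-private",
        DBSubnetGroupDescription="Private subnets for the database tier",
        SubnetIds=PRIVATE_SUBNETS,
    )
    rds.create_db_instance(
        DBInstanceIdentifier="ecommerce-db",
        Engine="mysql",
        DBInstanceClass="db.m6g.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",
        DBSubnetGroupName="ecommerce-private",
        MultiAZ=True,
        PubliclyAccessible=False,
    )

The web-tier EC2 instances themselves would be launched by an Auto Scaling group across the private subnets and registered with a target group behind the load balancer.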