AWS Certified Database Specialty (DBS-C01)

Disclaimers:
  • ExamTopics website is not related to, affiliated with, endorsed, or authorized by Amazon.
  • Trademarks, certification, and product names are used for reference only and belong to Amazon.

Topic 1 - Exam A

Question #16 Topic 1

A social media company is using Amazon DynamoDB to store user profile data and user activity data. Developers are reading and writing the data, causing the tables to grow significantly, and they have started to face performance bottlenecks with the tables. Which solution should a database specialist recommend to read items the FASTEST without consuming all the provisioned throughput for the tables?

  • A Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.
  • B Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.
  • C Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.
  • D Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.
Suggested Answer: B
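For context, here is a minimal boto3 sketch of the three read APIs the options contrast, assuming a hypothetical user-activity table keyed on user_id (partition key) and activity_ts (sort key):

```python
import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb").Table("user_activity")  # hypothetical table

# GetItem: the cheapest read -- fetches exactly one item by its full primary key.
item = table.get_item(Key={"user_id": "u123", "activity_ts": "2023-01-01T00:00:00Z"})

# Query: reads only items that share one partition key, optionally narrowed by
# a sort-key condition; capacity is consumed only for the items actually read.
page = table.query(
    KeyConditionExpression=Key("user_id").eq("u123")
    & Key("activity_ts").begins_with("2023-01"),
)

# Scan with a filter expression: still reads (and is billed for) every item in
# the table; the filter only trims what is returned to the client.
results = table.scan(FilterExpression=Attr("activity_type").eq("login"))
```

As the comments note, a filter expression does not reduce the capacity a Scan consumes; Query and GetItem are the targeted, throughput-friendly read paths.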
Question #17 Topic 1

A company stores session history for its users in an Amazon DynamoDB table. The company has a large user base and generates large amounts of session data. Teams analyze the session data for 1 week, and then the data is no longer needed. A database specialist needs to design an automated solution to purge session data that is more than 1 week old. Which strategy meets these requirements with the MOST operational efficiency?

  • A Create an AWS Step Functions state machine with a DynamoDB DeleteItem operation that uses the ConditionExpression parameter to delete items older than a week. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that runs the Step Functions state machine on a weekly basis.
  • B Create an AWS Lambda function to delete items older than a week from the DynamoDB table. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that triggers the Lambda function on a weekly basis.
  • C Enable Amazon DynamoDB Streams on the table. Use a stream to invoke an AWS Lambda function to delete items older than a week from the DynamoDB table.
  • D Enable TTL on the DynamoDB table, and set an attribute of the Number data type as the TTL attribute. DynamoDB will automatically delete items whose TTL value is less than the current time.
Suggested Answer: B
NOTE: Reference: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-run-lambda-schedule.html
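As an illustration of option B's mechanism, here is a hedged sketch of the scheduled Lambda handler, assuming a hypothetical session_history table keyed on session_id with a numeric created_at epoch-seconds attribute:

```python
import time
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("session_history")  # hypothetical table

def handler(event, context):
    # EventBridge invokes this on a weekly schedule.
    cutoff = int(time.time()) - 7 * 24 * 3600  # one week ago, epoch seconds
    scan_kwargs = {"FilterExpression": Attr("created_at").lt(cutoff)}
    with table.batch_writer() as batch:
        while True:
            page = table.scan(**scan_kwargs)
            for item in page["Items"]:
                batch.delete_item(Key={"session_id": item["session_id"]})
            if "LastEvaluatedKey" not in page:  # stop when the scan is exhausted
                break
            scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```

For comparison, option D's TTL approach needs only a one-time update_time_to_live call on the table and no scheduled compute at all.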
Question #18 Topic 1

A company is loading sensitive data into an Amazon Aurora MySQL database. To meet compliance requirements, the company needs to enable audit logging on the Aurora MySQL DB cluster to audit database activity. This logging will include events such as connections, disconnections, queries, and tables queried. The company also needs to publish the DB logs to Amazon CloudWatch to perform real-time data analysis. Which solution meets these requirements?

  • A Modify the default option group parameters to enable Advanced Auditing. Restart the database for the changes to take effect.
  • B Create a custom DB cluster parameter group. Modify the parameters for Advanced Auditing. Modify the cluster to associate the new custom DB parameter group with the Aurora MySQL DB cluster.
  • C Take a snapshot of the database. Create a new DB instance, and enable custom auditing and logging to CloudWatch. Deactivate the DB instance that has no logging.
  • D Enable AWS CloudTrail for the DB instance. Create a filter that provides only connections, disconnections, queries, and tables queried.
Suggested Answer: A
NOTE: Reference: https://aws.amazon.com/blogs/database/configuring-an-audit-log-to-capture-database-activities-for-amazon-rds-for-mysql-and-amazon-aurora-with-mysql-compatibility/
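The parameter-group mechanism described in option B can be sketched with boto3; the group name, parameter-group family, and cluster identifier below are assumptions that must match the actual engine version:

```python
import boto3

rds = boto3.client("rds")

# Create a custom DB cluster parameter group (default groups cannot be modified).
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-audit-pg",  # hypothetical name
    DBParameterGroupFamily="aurora-mysql8.0",       # must match your engine version
    Description="Aurora MySQL Advanced Auditing",
)

# Turn on Advanced Auditing and choose which events to capture.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-audit-pg",
    Parameters=[
        {"ParameterName": "server_audit_logging", "ParameterValue": "1",
         "ApplyMethod": "immediate"},
        {"ParameterName": "server_audit_events",
         "ParameterValue": "CONNECT,QUERY,QUERY_DCL,QUERY_DDL,QUERY_DML,TABLE",
         "ApplyMethod": "immediate"},
    ],
)

# Associate the group with the cluster and publish the audit log to CloudWatch.
rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",  # hypothetical cluster
    DBClusterParameterGroupName="aurora-audit-pg",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
)
```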
Question #19 Topic 1

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance. Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

  • A Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database.
  • B Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3.
  • C Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance.
  • D Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3.
  • E Encrypt the data with client-side encryption before transferring the data to Amazon RDS.
Suggested Answer: AC
NOTE: Reference: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html
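On RDS for SQL Server, TDE (as in option C) is enabled by attaching an option group that carries the TDE option. A hedged boto3 sketch follows, with placeholder names and an engine version that must match the target instance:

```python
import boto3

rds = boto3.client("rds")

# An option group for SQL Server Enterprise edition (sqlserver-ee).
rds.create_option_group(
    OptionGroupName="sqlserver-tde-og",  # hypothetical name
    EngineName="sqlserver-ee",
    MajorEngineVersion="15.00",          # must match the target instance
    OptionGroupDescription="TDE for RDS SQL Server Enterprise",
)

# Add the TDE option to the group.
rds.modify_option_group(
    OptionGroupName="sqlserver-tde-og",
    OptionsToInclude=[{"OptionName": "TDE"}],
    ApplyImmediately=True,
)

# Attach the option group to the restored DB instance.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-target",  # hypothetical target instance
    OptionGroupName="sqlserver-tde-og",
    ApplyImmediately=True,
)
```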
Question #20 Topic 1

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on premises to an Amazon Aurora PostgreSQL database on AWS. The database specialist identifies a compatibility problem: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration. What is the MOST operationally efficient solution that meets these requirements?

  • A Override the default uppercase format of Oracle schemas by enclosing object names in quotation marks during creation.
  • B Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action as convert-lowercase.
  • C Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase.
  • D Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase.
Suggested Answer: B
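Option B's mechanism is a DMS table-mapping transformation rule whose rule-action is convert-lowercase. A minimal sketch, with placeholder ARNs and a hypothetical task name:

```python
import json
import boto3

table_mappings = {
    "rules": [
        # Select everything in the source.
        {"rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
         "object-locator": {"schema-name": "%", "table-name": "%"},
         "rule-action": "include"},
        # Lowercase schema, table, and column names during migration.
        {"rule-type": "transformation", "rule-id": "2", "rule-name": "lc-schema",
         "rule-target": "schema",
         "object-locator": {"schema-name": "%"},
         "rule-action": "convert-lowercase"},
        {"rule-type": "transformation", "rule-id": "3", "rule-name": "lc-table",
         "rule-target": "table",
         "object-locator": {"schema-name": "%", "table-name": "%"},
         "rule-action": "convert-lowercase"},
        {"rule-type": "transformation", "rule-id": "4", "rule-name": "lc-column",
         "rule-target": "column",
         "object-locator": {"schema-name": "%", "table-name": "%",
                            "column-name": "%"},
         "rule-action": "convert-lowercase"},
    ]
}

boto3.client("dms").create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-pg",        # hypothetical name
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
)
```

The three transformation rules lowercase schema, table, and column names respectively, aligning Oracle's uppercase data-dictionary metadata with PostgreSQL's lowercase convention.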