
Free DBS-C01 Questions for Amazon DBS-C01 Exam as PDF & Practice Test Software

Total 322 questions

Question 1

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days.

The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database.

Which solution will meet these requirements?



Answer : B

Explanation from Amazon documents:

The rds.log_retention_period parameter specifies how long your RDS for PostgreSQL DB instance keeps its log files. The default setting is 3 days (4,320 minutes), but you can set this value anywhere from 1 day (1,440 minutes) to 7 days (10,080 minutes). By reducing the log retention period, you can free up storage space on your DB instance without affecting its availability or performance.
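As a unit sanity check on those values, the day-to-minute conversion behind the parameter can be sketched as follows (an illustrative helper, not part of any AWS SDK):

```python
# rds.log_retention_period is expressed in minutes; the documented range for
# RDS for PostgreSQL is 1 day to 7 days.
MINUTES_PER_DAY = 1440
MIN_RETENTION = 1 * MINUTES_PER_DAY    # 1,440 minutes
MAX_RETENTION = 7 * MINUTES_PER_DAY    # 10,080 minutes

def retention_minutes(days: int) -> int:
    """Convert a retention period in days to the parameter value in minutes."""
    minutes = days * MINUTES_PER_DAY
    if not MIN_RETENTION <= minutes <= MAX_RETENTION:
        raise ValueError(f"retention must be between 1 and 7 days, got {days}")
    return minutes

assert retention_minutes(3) == 4320    # the default
assert retention_minutes(1) == 1440    # the minimum frees space soonest
```

Setting the parameter to the minimum (1,440) is what allows the extra test logs to be deleted and their storage reclaimed without a reboot.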

To modify the rds.log_retention_period parameter, you need to use a custom DB parameter group for your RDS for PostgreSQL instance. You can modify the parameter value using the AWS Management Console, the AWS CLI, or the RDS API. The parameter change is applied immediately, but it may take up to 24 hours for the database logs to be deleted. Therefore, you do not need to reboot the DB instance to save the change or to reclaim the storage space.

Therefore, option B is the correct solution to meet the requirements. Option A is incorrect because setting the rds.log_retention_period parameter to 0 disables log retention and prevents you from viewing or downloading any database logs. Rebooting the DB instance is also unnecessary and may cause downtime. Option C is incorrect because the temp_file_limit parameter controls the maximum size of temporary files that a session can generate, not the size of database logs; modifying this parameter will not reclaim any storage space on the DB instance. Option D is incorrect because rebooting the DB instance is not required to save the change or to reclaim the storage space.


Question 2

A large financial services company uses Amazon ElastiCache for Redis for its new application that has a global user base. A database administrator must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company's security team requires the encryption of cross-Region data transfers.

Which solution meets these requirements with the LEAST amount of operational effort?



Answer : B

Explanation from Amazon documents:

Amazon ElastiCache for Redis is a fully managed in-memory data store that supports Redis, an open source, key-value database. Amazon ElastiCache for Redis provides several features to enhance the performance, availability, scalability, and security of your Redis data, such as cluster mode, global datastore, replication groups, snapshots, and encryption.

A global datastore is a feature that allows you to create a cross-Region read replica of your ElastiCache for Redis cluster. A global datastore consists of a primary cluster that is replicated across up to two other Regions as secondary clusters. A global datastore provides low-latency reads and high availability for your Redis data across Regions. A global datastore also supports encryption of cross-Region data transfers using AWS Key Management Service (AWS KMS).

To create a global datastore in ElastiCache for Redis, you need to do the following:

Create a primary cluster in one Region. You can use an existing cluster or create a new one. The cluster must have cluster mode enabled and use Redis engine version 5.0.6 or later.

Create a global datastore and add the primary cluster to it. You can use the AWS Management Console, the AWS CLI, or the ElastiCache API to create a global datastore.

Create one or two secondary clusters in other Regions and add them to the global datastore. The secondary clusters must have the same specifications as the primary cluster, such as node type, number of shards, and number of replicas per shard.

Enable encryption in transit and at rest for the primary and secondary clusters. Specify a customer master key (CMK) from AWS KMS for each cluster.
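The requirement that every secondary cluster mirror the primary's specification can be sketched as a simple pre-flight check (illustrative Python only; the field names are hypothetical, not boto3 response keys):

```python
# A secondary cluster can only join a global datastore if its specification
# matches the primary's: same node type, shard count, and replicas per shard.
REQUIRED_MATCH = ("node_type", "num_shards", "replicas_per_shard")

def secondary_matches_primary(primary: dict, secondary: dict) -> bool:
    """Return True if the proposed secondary cluster spec matches the primary."""
    return all(primary[field] == secondary[field] for field in REQUIRED_MATCH)

primary = {"node_type": "cache.r6g.large", "num_shards": 3, "replicas_per_shard": 1}
secondary = dict(primary)  # same specification, deployed in another Region
assert secondary_matches_primary(primary, secondary)
assert not secondary_matches_primary(primary, dict(primary, num_shards=5))
```

ElastiCache enforces this check for you when you add a secondary cluster; the sketch only makes the constraint explicit.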

By creating a global datastore in ElastiCache for Redis and creating replica clusters in two other Regions, the database administrator can build a caching solution that is available across Regions and includes low-latency replication and failover capabilities for DR. The solution also meets the security requirement of encrypting cross-Region data transfers with AWS KMS, and it requires the least operational effort because it does not involve any data migration or manual intervention.

Therefore, option B is the correct solution to meet the requirements. Option A is not optimal because enabling cluster mode in ElastiCache for Redis and creating multiple clusters across Regions will not provide cross-Region replication or failover capabilities, and using AWS DMS to replicate the cache data will incur additional time and cost and may not support encryption of cross-Region data transfers. Option C is not optimal because disabling cluster mode in ElastiCache for Redis and creating multiple replication groups across Regions will not provide cross-Region replication or failover capabilities either, and the same AWS DMS drawbacks apply. Option D is not optimal because creating a snapshot of ElastiCache for Redis in the primary Region and copying it to the failover Region will not provide low-latency replication or high availability for the Redis data across Regions, and using the snapshot to restore the cluster in the failover Region when DR is required will involve manual intervention and downtime.


Question 3

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache.

What is the recommended strategy for this use case?



Answer : A

The recommended strategy for this use case is option A: use ElastiCache for Memcached with write-through and long time to live (TTL).

Amazon ElastiCache is a fully managed in-memory data store service that supports two open source engines: Memcached and Redis. Amazon ElastiCache can be used to improve the performance and scalability of web applications by caching frequently accessed data in memory, reducing the load and latency of database queries.

Memcached and Redis have different features and use cases. Memcached is a simple, high-performance, distributed caching system that supports a large number of concurrent connections and large object sizes. Redis is an advanced, feature-rich, in-memory data structure store that supports data persistence, replication, transactions, pub/sub, Lua scripting, and various data types.

For this use case, Memcached is more suitable than Redis because the news website does not need the advanced features of Redis, such as data persistence or replication. The news website only needs a fast and simple caching solution that can handle high traffic and large objects.

Write-through and lazy loading are two common caching strategies that determine when and how data is written to the cache. Write-through is a strategy that writes data to the cache whenever it is written to the database. Lazy loading is a strategy that writes data to the cache only when it is requested for the first time.

For this use case, write-through is more suitable than lazy loading because the news website needs to improve its page load time, especially during very popular news releases. Write-through ensures that the cache always has the most up-to-date data and avoids cache misses or stale data. Lazy loading may cause cache misses or stale data if the data is not cached or updated in time.

Time to live (TTL) is a parameter that specifies how long an item can remain in the cache before it expires and is deleted. TTL can be used to control the cache size and freshness.

For this use case, long TTL is more suitable than short TTL because the news website has a low probability of changing its data once a news page is published. Long TTL allows the data to stay in the cache longer and reduces the frequency of cache updates or evictions. Short TTL may cause unnecessary cache updates or evictions if the data does not change frequently.

Therefore, option A is the recommended strategy for this use case because it uses ElastiCache for Memcached with write-through and long TTL, which provides a fast and simple caching solution that can handle high traffic and large objects, and ensures that the cache always has the most up-to-date and relevant data.
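The combined strategy (write-through plus a long TTL) can be sketched in a few lines of illustrative Python; the in-memory dict stands in for Memcached and the backing database, and all names are hypothetical:

```python
import time

class WriteThroughCache:
    """Minimal write-through cache with per-item TTL (illustrative sketch)."""

    def __init__(self, db: dict, ttl_seconds: float, clock=time.monotonic):
        self.db = db              # stands in for the backing database
        self.ttl = ttl_seconds
        self.clock = clock        # injectable clock makes expiry testable
        self._cache = {}          # key -> (value, expires_at)

    def publish(self, key, value):
        # Write-through: every database write also populates the cache,
        # so readers never see stale data after a correction is published.
        self.db[key] = value
        self._cache[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if self.clock() < expires_at:
                return value      # cache hit: no database round trip
            del self._cache[key]  # expired: evict
        value = self.db[key]      # cache miss: fall back to the database
        self._cache[key] = (value, self.clock() + self.ttl)
        return value
```

A long TTL (hours rather than seconds) fits a published news page that almost never changes, while the write-through path still refreshes the cached copy immediately if an error is corrected.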


Question 4

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete.

The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks.

Which combination of changes will meet these requirements? (Choose two.)



Answer : B, D

Explanation from Amazon documents:

AWS Database Migration Service (AWS DMS) is a service that helps you migrate data from one data source to another. AWS DMS supports full load and change data capture (CDC) modes, which enable you to migrate data with minimal downtime. AWS DMS also supports parallel load, which allows you to load data from multiple tables or partitions concurrently.

To improve database performance, reduce data migration time, and create multiple DMS tasks, the database specialist should use the following combination of changes:

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value. This change will allow the database specialist to split the migration workload into smaller and more manageable units, and increase the parallelism of the full load process. The MaxFullLoadSubTasks parameter specifies the maximum number of tables that are loaded in parallel for each DMS task. By setting this parameter to a higher value, the database specialist can increase the throughput and performance of the full load process.
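In the DMS task settings JSON, this setting lives under FullLoadSettings; a fragment raising it from its default of 8 might look like this (the value shown is illustrative):

```json
{
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 49
  }
}
```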

Use parallel load with different data boundaries for larger tables. This change will allow the database specialist to divide the larger tables into smaller chunks based on a partition key or a range of values, and load them in parallel using multiple DMS tasks. Parallel load can significantly reduce the migration time and improve the performance of large tables.
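In the DMS table mappings, parallel load with explicit data boundaries is configured through a table-settings rule. A sketch for a hypothetical SALES.ORDERS table, split on ORDER_ID (the schema, table, column, and boundary values here are all assumptions for illustration):

```json
{
  "rules": [
    {
      "rule-type": "table-settings",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "SALES",
        "table-name": "ORDERS"
      },
      "parallel-load": {
        "type": "ranges",
        "columns": ["ORDER_ID"],
        "boundaries": [
          ["1000000"],
          ["2000000"],
          ["3000000"]
        ]
      }
    }
  ]
}
```

Each boundary defines a segment that DMS loads concurrently, so a single large table is migrated as several parallel streams instead of one.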

Therefore, options B and D are the correct combination of changes to meet the requirements. Option A is incorrect because increasing the value of the ParallelLoadThreads parameter in the DMS task settings for the tables will not improve the performance or reduce the migration time significantly. The ParallelLoadThreads parameter specifies the number of threads that are used to load data from a single table or partition; increasing it may raise the CPU utilization and network bandwidth consumption of the source and target databases, but it does not increase the parallelism of the full load process. Option C is incorrect because using a smaller set of tables with each DMS task and setting the MaxFullLoadSubTasks parameter to a lower value will decrease the parallelism and performance of the full load process. Option E is incorrect because running the DMS tasks on a larger instance class and increasing local storage on the instance will not address the root cause of the performance issue, which is the lack of parallelism and partitioning of the large tables.


