A company runs applications on Windows and Linux Amazon EC2 instances. The instances run across multiple Availability Zones in an AWS Region. The company uses Auto Scaling groups for each application.
The company needs a durable storage solution for the instances. The solution must use SMB for Windows and must use NFS for Linux. The solution must also have sub-millisecond latencies. All instances will read and write the data.
Which combination of steps will meet these requirements? (Select THREE.)
Answer : A, B, D
* Create an Amazon Elastic File System (Amazon EFS) File System with Targets in Multiple Availability Zones:
Amazon EFS provides a scalable and highly available network file system that supports the NFS protocol. EFS is ideal for Linux instances as it allows multiple instances to read and write data concurrently.
Setting up EFS with targets in multiple Availability Zones ensures high availability and durability.
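As a rough sketch (the creation token, file system ID, subnet IDs, and security group below are placeholders), the file system and its mount targets could be created with the AWS CLI:
# Create the EFS file system
aws efs create-file-system --creation-token shared-app-data --performance-mode generalPurpose
# Create one mount target per Availability Zone so instances in every AZ can reach the file system
aws efs create-mount-target --file-system-id fs-12345678 --subnet-id subnet-az1 --security-groups sg-efs-access
aws efs create-mount-target --file-system-id fs-12345678 --subnet-id subnet-az2 --security-groups sg-efs-access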
* Create an Amazon FSx for NetApp ONTAP Multi-AZ File System:
Amazon FSx for NetApp ONTAP offers a fully managed file storage solution that supports both SMB for Windows and NFS for Linux.
The Multi-AZ deployment ensures high availability and durability, providing sub-millisecond latencies suitable for the application's performance requirements.
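An illustrative AWS CLI sketch of creating a Multi-AZ file system (the subnet IDs, storage capacity, and throughput capacity are placeholder values):
# Create a Multi-AZ FSx for NetApp ONTAP file system; clients mount its SVM volumes over SMB or NFS
aws fsx create-file-system \
    --file-system-type ONTAP \
    --storage-capacity 1024 \
    --subnet-ids subnet-az1 subnet-az2 \
    --ontap-configuration DeploymentType=MULTI_AZ_1,ThroughputCapacity=128,PreferredSubnetId=subnet-az1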
* Update the User Data for Each Application's Launch Template to Mount the File System:
Updating the user data in the launch template ensures that every new instance launched by the Auto Scaling group will automatically mount the appropriate file system.
This step is necessary to ensure that all instances can access the shared storage without manual intervention.
Example user data for mounting EFS (Linux):
#!/bin/bash
# Install the EFS mount helper, create the mount point, and mount the shared file system
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs
Example user data for mounting FSx (Windows):
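A minimal sketch, assuming a hypothetical SVM DNS name and share name on the FSx for NetApp ONTAP file system:
<powershell>
# Map the FSx for NetApp ONTAP SMB share; the UNC path below is a placeholder
net use Z: \\svm-data.fsx.company.com\share /persistent:yes
</powershell>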
By implementing these steps, the company can provide a durable storage solution with sub-millisecond latencies that supports both SMB and NFS protocols, meeting the requirements for both Windows and Linux instances.
A company uses Amazon EC2 as its primary compute platform. A DevOps team wants to audit the company's EC2 instances to check whether any prohibited applications have been installed on the EC2 instances.
Which solution will meet these requirements with the MOST operational efficiency?
Answer : A
* Configure AWS Systems Manager on Each Instance:
AWS Systems Manager provides a unified interface for managing AWS resources. Install the Systems Manager agent on each EC2 instance to enable inventory management and other features.
* Use AWS Systems Manager Inventory:
Systems Manager Inventory collects metadata about your instances and the software installed on them. This data includes information about applications, network configurations, and more.
Enable Systems Manager Inventory on all EC2 instances to gather detailed information about installed applications.
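For example, a State Manager association with the AWS-GatherSoftwareInventory document turns on inventory collection (the targets and schedule below are illustrative):
aws ssm create-association \
    --name AWS-GatherSoftwareInventory \
    --targets "Key=InstanceIds,Values=*" \
    --schedule-expression "rate(1 day)"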
* Use Systems Manager Resource Data Sync to Synchronize and Store Findings in an Amazon S3 Bucket:
Resource Data Sync aggregates inventory data from multiple accounts and regions into a single S3 bucket, making it easier to query and analyze the data.
Configure Resource Data Sync to automatically transfer inventory data to an S3 bucket for centralized storage.
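A sketch of configuring the sync with the AWS CLI (the sync name, bucket name, and Region are placeholders):
aws ssm create-resource-data-sync \
    --sync-name InventoryAuditSync \
    --s3-destination "BucketName=inventory-audit-bucket,SyncFormat=JsonSerDe,Region=us-east-1"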
* Create an AWS Lambda Function that Runs When New Objects are Added to the S3 Bucket:
Use an S3 event to trigger a Lambda function whenever new inventory data is added to the S3 bucket.
The Lambda function can parse the inventory data and check for the presence of prohibited applications.
* Configure the Lambda Function to Identify Prohibited Applications:
The Lambda function should be programmed to scan the inventory data for any known prohibited applications and generate alerts or take appropriate actions if such applications are found.
Example Lambda function in Python:
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    # Read the inventory object that triggered this invocation
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    response = s3.get_object(Bucket=bucket, Key=key)
    inventory_data = json.loads(response['Body'].read().decode('utf-8'))
    prohibited_apps = ['app1', 'app2']
    # Assumes the synced inventory data exposes instances and their installed applications in this shape
    for instance in inventory_data['Instances']:
        for app in instance['Applications']:
            if app['Name'] in prohibited_apps:
                # Send notification or take action
                print(f"Prohibited application found: {app['Name']} on instance {instance['InstanceId']}")
    return {'statusCode': 200, 'body': json.dumps('Check completed')}
By leveraging AWS Systems Manager Inventory, Resource Data Sync, and Lambda, this solution provides an efficient and automated way to audit EC2 instances for prohibited applications.
A company is refactoring applications to use AWS. The company identifies an internal web application that needs to make Amazon S3 API calls in a specific AWS account.
The company wants to use its existing identity provider (IdP) auth.company.com for authentication. The IdP supports only OpenID Connect (OIDC). A DevOps engineer needs to secure the web application's access to the AWS account.
Which combination of steps will meet these requirements? (Select THREE.)
Answer : B, D, E
Step 1: Creating an IAM OIDC Identity Provider
First, register the existing IdP with IAM by creating an OpenID Connect identity provider that uses the provider URL (auth.company.com), the audience (the application's client ID), and the signature (certificate thumbprint) from the existing IdP.
This corresponds to Option B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.
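An illustrative CLI sketch (the client ID and thumbprint are placeholder values):
aws iam create-open-id-connect-provider \
    --url https://auth.company.com \
    --client-id-list appid_from_idp \
    --thumbprint-list 1234567890abcdef1234567890abcdef12345678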
Step 2: Creating an IAM Role with Specific Permissions
Next, you need to create an IAM role with a trust policy that allows the external IdP to assume it when certain conditions are met. Specifically, the trust policy needs to allow the role to be assumed based on the context key auth.company.com:aud (audience claim in the token).
Action: Create an IAM role that has the necessary permissions (e.g., Amazon S3 access). The role's trust policy should specify the OIDC IdP as the trusted entity and validate the audience claim (auth.company.com:aud), which comes from the token provided by the IdP.
Why: This step ensures that only the specified web application authenticated via OIDC can assume the IAM role to make API calls.
This corresponds to Option D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.
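An illustrative trust policy for the role (the AWS account ID in the provider ARN is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/auth.company.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "auth.company.com:aud": "appid_from_idp"
        }
      }
    }
  ]
}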
Step 3: Using Temporary Credentials via AssumeRoleWithWebIdentity API
To securely make Amazon S3 API calls, the web application will need temporary credentials. The web application can use the AssumeRoleWithWebIdentity API call to assume the IAM role configured in the previous step and obtain temporary AWS credentials. These credentials can then be used to interact with Amazon S3.
Action: The web application must be configured to call the AssumeRoleWithWebIdentity API operation, passing the OIDC token from the IdP to obtain temporary credentials.
Why: This allows the web application to authenticate via the external IdP and then authorize access to AWS resources securely using short-lived credentials.
This corresponds to Option E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
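A minimal Python (boto3) sketch of this flow; the role ARN and the token value are placeholders:
import boto3

# OIDC token (JWT) obtained from auth.company.com after authentication
oidc_token = '<token-from-auth.company.com>'

sts = boto3.client('sts')

# Exchange the OIDC token for temporary AWS credentials
response = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam::111122223333:role/WebAppS3Role',
    RoleSessionName='web-app-session',
    WebIdentityToken=oidc_token
)
credentials = response['Credentials']

# Use the temporary credentials to make the S3 API calls
s3 = boto3.client(
    's3',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken']
)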
Summary of Selected Answers:
B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.
D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.
E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
This setup enables the web application to use OpenID Connect (OIDC) for authentication and securely interact with Amazon S3 in a specific AWS account using short-lived credentials obtained through AWS Security Token Service (STS).
A company uses an organization in AWS Organizations to manage several AWS accounts that the company's developers use. The company requires all data to be encrypted in transit.
Multiple Amazon S3 buckets that were created in developer accounts allow unencrypted connections. A DevOps engineer must enforce encryption of data in transit for all existing S3 buckets that are created in accounts in the organization.
Which solution will meet these requirements?
Answer : C
Step 2: Deploying a Conformance Pack with Managed Rules
After AWS Config is enabled, you need to deploy a conformance pack that contains the s3-bucket-ssl-requests-only managed rule. This rule enforces that all S3 buckets only allow requests that use Secure Sockets Layer (SSL) connections (HTTPS).
Action: Deploy a conformance pack that uses the s3-bucket-ssl-requests-only rule. This rule ensures that only SSL connections (for encrypted data in transit) are allowed when accessing S3.
Why: This rule guarantees that data is encrypted in transit by enforcing SSL connections to the S3 buckets.
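A minimal conformance pack template containing this rule might look like the following sketch (the remediation configuration is omitted):
Resources:
  S3BucketSslRequestsOnly:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-ssl-requests-only
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SSL_REQUESTS_ONLY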
Step 3: Using an AWS Systems Manager Automation Runbook
To automatically remediate the compliance issues, such as S3 buckets allowing non-SSL requests, a Systems Manager Automation runbook is deployed. The runbook will automatically add a bucket policy that denies access to any requests that do not use SSL.
Action: Use a Systems Manager Automation runbook that adds a bucket policy statement to deny access when the aws:SecureTransport condition key is false.
Why: This ensures that all S3 buckets across the organization comply with the policy of enforcing encrypted data in transit.
This corresponds to Option C: Turn on AWS Config for the organization. Deploy a conformance pack that uses the s3-bucket-ssl-requests-only managed rule and an AWS Systems Manager Automation runbook. Use a runbook that adds a bucket policy statement to deny access to an S3 bucket when the value of the aws:SecureTransport condition key is false.
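A bucket policy statement that denies non-SSL access, as described above, might look like this sketch (the bucket name is a placeholder):
{
  "Sid": "DenyNonSSLRequests",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::example-bucket",
    "arn:aws:s3:::example-bucket/*"
  ],
  "Condition": {
    "Bool": {
      "aws:SecureTransport": "false"
    }
  }
}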
A DevOps engineer is setting up an Amazon Elastic Container Service (Amazon ECS) blue/green deployment for an application by using AWS CodeDeploy and AWS CloudFormation. During the deployment window, the application must be highly available and CodeDeploy must shift 10% of traffic to a new version of the application every minute until all traffic is shifted.
Which configuration should the DevOps engineer add in the CloudFormation template to meet these requirements?
Answer : B
This corresponds to Option B: Add the AWS::CodeDeployBlueGreen transform and the AWS::CodeDeploy::BlueGreen hook parameter with the CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration.
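A trimmed sketch of the relevant template sections (the ECS service logical ID is a placeholder, and other required hook properties such as the application's ECS attributes are omitted):
Transform:
  - 'AWS::CodeDeployBlueGreen'
Hooks:
  CodeDeployBlueGreenHook:
    Type: 'AWS::CodeDeploy::BlueGreen'
    Properties:
      TrafficRoutingConfig:
        Type: TimeBasedLinear
        TimeBasedLinear:
          StepPercentage: 10   # illustrative values corresponding to shifting 10% of traffic per minute
          BakeTimeMins: 1
      Applications:
        - Target:
            Type: 'AWS::ECS::Service'
            LogicalID: ECSDemoService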