A company is running a custom-built application that processes records. All the components run on Amazon EC2 instances that run in an Auto Scaling group. Each record's processing is a multistep sequential action that is compute-intensive. Each step is always completed in 5 minutes or less.
A limitation of the current system is that if any step fails, the application has to reprocess the record from the beginning. The company wants to update the architecture so that the application has to reprocess only the failed steps.
What is the MOST operationally efficient solution that meets these requirements?
Answer : D
* Use AWS Step Functions to Orchestrate Processing:
AWS Step Functions lets you build distributed applications by combining AWS Lambda functions or other AWS services into workflows.
Decoupling the processing into Step Functions tasks enables you to retry individual steps without reprocessing the entire record.
* Architectural Steps:
Create a web application to pass records to AWS Step Functions:
The web application can be a simple frontend that receives input and triggers the Step Functions workflow.
Define a Step Functions state machine:
Each step in the state machine represents a processing stage. If a step fails, Step Functions can retry the step based on defined conditions.
Use AWS Lambda functions:
Lambda functions can be used to handle each processing step. These functions can be stateless and handle specific tasks, reducing the complexity of error handling and reprocessing logic.
* Operational Efficiency:
Using Step Functions and Lambda improves operational efficiency by providing built-in error handling, retries, and state management.
This architecture scales automatically and isolates failures to individual steps, ensuring only failed steps are retried.
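The per-step retry behavior described above can be sketched in Amazon States Language. The state names, Lambda ARNs, and retry settings below are illustrative assumptions, not part of the question; the 300-second timeout reflects the stated 5-minute step duration:

```json
{
  "Comment": "Sketch: each processing step retries independently on failure",
  "StartAt": "StepOne",
  "States": {
    "StepOne": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:StepOne",
      "TimeoutSeconds": 300,
      "Retry": [
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 10,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "Next": "StepTwo"
    },
    "StepTwo": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:StepTwo",
      "TimeoutSeconds": 300,
      "Retry": [
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 10,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "End": true
    }
  }
}
```

Because each Task state carries its own Retry policy, a failure in StepTwo retries only StepTwo; StepOne's completed work is never repeated.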
A company has an organization in AWS Organizations. A DevOps engineer needs to maintain multiple AWS accounts that belong to different OUs in the organization. All resources, including IAM policies and Amazon S3 policies within an account, are deployed through AWS CloudFormation. All templates and code are maintained in an AWS CodeCommit repository. Recently, some developers have not been able to access an S3 bucket from some accounts in the organization.
The following policy is attached to the S3 bucket.
What should the DevOps engineer do to resolve this access issue?
Answer : D
Verify No SCP Blocking Access:
Ensure that no Service Control Policy (SCP) is blocking access for developers to the S3 bucket. SCPs are applied at the organization or organizational unit (OU) level in AWS Organizations and can restrict what actions users and roles in the affected accounts can perform.
Verify No IAM Policy Permissions Boundaries Blocking Access:
IAM permissions boundaries can limit the maximum permissions that a user or role can have. Verify that these boundaries are not restricting access to the S3 bucket.
Make Necessary Changes to SCP and IAM Policy Permissions Boundaries:
Adjust the SCPs and IAM permissions boundaries if they are found to be the cause of the access issue. Make sure these changes are reflected in the code maintained in the AWS CodeCommit repository.
Invoke Deployment Through CloudFormation:
Commit the updated policies to the CodeCommit repository.
Use AWS CloudFormation to deploy the changes across the relevant accounts and resources to ensure that the updated permissions are applied consistently.
By ensuring no SCPs or IAM policy permissions boundaries are blocking access and making necessary changes if they are, the DevOps engineer can resolve the access issue for developers trying to access the S3 bucket.
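As a concrete illustration of what to look for, an SCP like the following, attached at the organization or OU level, would block developers' S3 access even when their account-level IAM policies and the bucket policy allow it. The bucket name is a hypothetical placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3ObjectAccessExample",
      "Effect": "Deny",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Removing or scoping down such a Deny statement (or the equivalent statement in a permissions boundary), committing the change to CodeCommit, and redeploying through CloudFormation restores access consistently across accounts.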
A company is developing a web application's infrastructure using AWS CloudFormation. The database engineering team maintains the database resources in a CloudFormation template, and the software development team maintains the web application resources in a separate CloudFormation template. As the scope of the application grows, the software development team needs to use resources maintained by the database engineering team. However, both teams have their own review and lifecycle management processes that they want to keep. Both teams also require resource-level change-set reviews. The software development team would like to deploy changes to this template using their CI/CD pipeline.
Which solution will meet these requirements?
Answer : A
* Stack Export and Import:
Use the Export feature in CloudFormation to share outputs from one stack (e.g., database resources) and use them as inputs in another stack (e.g., web application resources).
* Steps to Create Stack Export:
Define the resources in the database CloudFormation template and use the Outputs section to export necessary values.
Outputs:
  DBInstanceEndpoint:
    Value: !GetAtt DBInstance.Endpoint.Address
    Export:
      Name: DBInstanceEndpoint
* Steps to Import into Web Application Stack:
In the web application CloudFormation template, use the ImportValue function to import these exported values.
Resources:
  MyResource:
    Type: 'AWS::SomeResourceType'
    Properties:
      SomeProperty: !ImportValue DBInstanceEndpoint
* Resource-Level Change-Set Reviews:
Both teams can continue using their respective review processes, as changes to each stack are managed independently.
Use CloudFormation change sets to preview changes before deploying.
By exporting resources from the database stack and importing them into the web application stack, both teams can maintain their separate review and lifecycle management processes while sharing necessary resources.
A company uses Amazon RDS for all databases in its AWS accounts. The company uses AWS Control Tower to build a landing zone that has an audit and logging account. All databases must be encrypted at rest for compliance reasons. The company's security engineer needs to receive notification about any noncompliant databases that are in the company's accounts.
Which solution will meet these requirements with the MOST operational efficiency?
Answer : A
Activate AWS Control Tower Guardrail:
Use AWS Control Tower to activate a detective guardrail that checks whether RDS storage is encrypted.
Create SNS Topic for Notifications:
Set up an Amazon Simple Notification Service (SNS) topic in the audit account to receive notifications about non-compliant databases.
Create EventBridge Rule to Filter Non-compliant Events:
Create an Amazon EventBridge rule that filters events related to the guardrail's findings on non-compliant RDS instances.
Configure the rule to send notifications to the SNS topic when non-compliant events are detected.
Subscribe Security Engineer's Email to SNS Topic:
Subscribe the security engineer's email address to the SNS topic to receive notifications when non-compliant databases are detected.
By using AWS Control Tower to activate a detective guardrail and setting up SNS notifications for non-compliant events, the company can efficiently monitor and ensure that all RDS databases are encrypted at rest.
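Control Tower detective guardrails are implemented under the hood as AWS Config rules, so the EventBridge rule can filter on Config compliance-change events. A minimal sketch of the event pattern follows; matching on a specific guardrail's Config rule name would be an additional, deployment-specific filter:

```json
{
  "source": ["aws.config"],
  "detail-type": ["Config Rules Compliance Change"],
  "detail": {
    "messageType": ["ComplianceChangeNotification"],
    "newEvaluationResult": {
      "complianceType": ["NON_COMPLIANT"]
    }
  }
}
```

The rule's target is the SNS topic in the audit account, so the security engineer's email subscription receives a message each time an RDS instance is evaluated as noncompliant.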
A company has an AWS Control Tower landing zone. The company's DevOps team creates a workload OU. A development OU and a production OU are nested under the workload OU. The company grants users full access to the company's AWS accounts to deploy applications.
The DevOps team needs to allow only a specific management IAM role to manage the IAM roles and policies of any AWS accounts in only the production OU.
Which combination of steps will meet these requirements? (Select TWO.)
Answer : B, E
You need to understand how SCP inheritance works in AWS Organizations. Deny statements behave differently from Allow statements.
Allow statements do not cascade automatically: an action is permitted in an account only if it is allowed by the SCPs at every level, from the root down through each OU to the account.
Deny statements always apply to all children.
That is why the default FullAWSAccess SCP is attached to the root to allow everything by default. If you restrict that root policy, the whole organization is restricted, no matter what the policies on the other OUs say. So it's not A. It's not D because it restricts the wrong OU.
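A deny-based SCP along the following lines, attached only to the production OU, would achieve the stated goal. The role name ManagementRole is a hypothetical placeholder for the company's actual management role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIAMExceptManagementRole",
      "Effect": "Deny",
      "Action": "iam:*",
      "Resource": "*",
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/ManagementRole"
        }
      }
    }
  ]
}
```

Because Deny statements flow down to every account under the OU they are attached to, attaching this SCP to the production OU blocks IAM management there for everyone except the management role, while leaving the development OU unaffected.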