This means that your cloud vendor is in charge of the security *of* the cloud, while you, the user, are in charge of the security *in* the cloud. You decide what kind of asset or service to run, who gets access to it, and how exactly it is configured.
Even so, many users skip this part, or at least don't give it the attention it deserves. According to Gartner, by 2023 99% of cloud security failures will be the customer's fault. That's you again.
To make your life a little easier, here are a few steps I suggest to improve your cloud security posture and avoid misconfigurations:
- The network layer: by default, AWS leaves outbound traffic open and unrestricted. As a best practice, restrict egress to only where it's necessary. Similarly, don't allow all traffic just because it's using well-known ports: exfiltrated data can be wrapped in seemingly legitimate ICMP packets or DNS requests.
- Secrets management: API keys, usernames and passwords, access tokens and other credentials are secrets you want to keep to yourself, ideally with a mechanism that issues temporary credentials. That's what secret managers are for! And no matter which one you choose, never, ever write secrets in plain text in your code, variables or comments. You're going to commit that to Git, and we all know how that ends.
- IAM (identity and access management): a great way to grant granular access and action permissions to users and resources. There's really no need to use the root user for everyday work. Users and resources should be able to do only what they need; this is known as the principle of least privilege. Take advantage of native tools like AWS IAM Access Analyzer to find unused permissions you can remove. The biggest IAM mistake is going the lazy route with excessive permissions: don't be tempted by the '*' wildcard or by pre-configured roles that let your assets do far too much.
- Logging and monitoring: this is how you find out about bad things happening in (near) real time, and the only way to run a forensic analysis after a real incident. As mentioned, a cloud incident usually begins (99% of the time) with a user's mistake; logging and monitoring let you detect that mistake before it is exploited. Log everything. In AWS, for example, the Flow Logs default captures only the 'accept' records, leaving out the 'rejects', and rejected traffic can teach you a lot about failed attempts, brute forcing, scanning and so on.
- Encryption: encrypt data at rest and in transit. Use your own keys (and don't forget to rotate them), or use a native service like AWS KMS. Enabling encryption is easier than you think and could save you from massive damage: if you 'forget' an S3 bucket open to the world, at least the data will be encrypted.
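To make the network-layer step concrete, here's a minimal sketch of a least-privilege egress rule, expressed as the parameters you would pass to boto3's `ec2.authorize_security_group_egress`. The group ID, port and CIDR are placeholder assumptions; substitute your own. (Remember that a new security group comes with an allow-all egress rule you'd revoke first.)

```python
def https_only_egress(group_id, cidr):
    """Allow outbound HTTPS to one CIDR instead of 0.0.0.0/0 on all ports."""
    return {
        "GroupId": group_id,
        "IpPermissions": [
            {
                "IpProtocol": "tcp",
                "FromPort": 443,  # a single known-needed port, not "all traffic"
                "ToPort": 443,
                "IpRanges": [
                    {"CidrIp": cidr, "Description": "HTTPS to internal services only"}
                ],
            }
        ],
    }

# Placeholder IDs -- replace with your own security group and network range.
rule = https_only_egress("sg-0123456789abcdef0", "10.0.0.0/16")
# With real credentials:
#   boto3.client("ec2").authorize_security_group_egress(**rule)
```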
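For the secrets step, one simple pattern is to inject credentials at deploy time and fail loudly if they're missing, so nothing plain-text ever lands in the repo. This is a sketch; `DB_PASSWORD` is an assumed variable name, and in production the value would come from a secret manager rather than being set inline as it is here for illustration.

```python
import os

def get_secret(name):
    """Read a secret from the environment; refuse to fall back to a
    default baked into the code."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not set -- inject it at deploy time")
    return value

# For illustration only: in real life your secret manager injects this.
os.environ["DB_PASSWORD"] = "example-only"
password = get_secret("DB_PASSWORD")
```

The same `get_secret` interface can later be backed by a real store (e.g. AWS Secrets Manager) without touching the calling code.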
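For the logging step, here's a sketch of the parameters you would pass to boto3's `ec2.create_flow_logs` to capture both accepted and rejected traffic. The VPC ID, log group name and role ARN are placeholders.

```python
flow_log_params = {
    "ResourceIds": ["vpc-0123456789abcdef0"],  # placeholder VPC
    "ResourceType": "VPC",
    "TrafficType": "ALL",  # not just ACCEPT: rejects reveal scans and brute forcing
    "LogDestinationType": "cloud-watch-logs",
    "LogGroupName": "/vpc/flow-logs",
    "DeliverLogsPermissionArn": "arn:aws:iam::123456789012:role/flow-logs-role",
}
# With real credentials:
#   boto3.client("ec2").create_flow_logs(**flow_log_params)
```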
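And for the encryption step, here's a sketch of a default-encryption configuration for an S3 bucket, shaped as the arguments to boto3's `s3.put_bucket_encryption`. The bucket name and KMS key ARN are placeholders.

```python
encryption_config = {
    "Bucket": "my-app-bucket",  # placeholder
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",  # use your own KMS key, and rotate it
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example",
                },
                "BucketKeyEnabled": True,  # cut KMS request costs on busy buckets
            }
        ]
    },
}
# With real credentials:
#   boto3.client("s3").put_bucket_encryption(**encryption_config)
```

With this in place, new objects are encrypted by default, so even a misconfigured public bucket doesn't hand out plaintext keys to your data.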