An AWS VPC can be seen as a virtual network inside AWS.

  • VPC is per-region
  • regionally resilient (see subnets)
  • by default, private and isolated from other VPCs
  • Each VPC has a CIDR block
  • An account has at most 1 default VPC (per region) and can have many custom VPCs
    • default VPC
      • default VPC can be removed and recreated (see the sketch after this list)
      • some services expect default VPC to be present
      • not likely going to be used for actual deployment
      • Default VPCs always have the same CIDR block: 172.31.0.0/16
  • The default VPC has one subnet in each AZ of the region. If one AZ (and its subnet) fails, the other subnets keep the VPC usable.
    • each of these per-AZ subnets has a /20 CIDR block
  • Each default VPC comes with
    • Internet Gateway (IGW)
    • Security Group (SG)
    • Network ACL (NACL)
  • Default VPC subnets auto-assign public IPv4 addresses to instances launched into them.
  • Avoid default VPC for production
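
Since the default VPC can be deleted and recreated, a quick way to check for it is the EC2 API. Below is a minimal boto3 sketch (Python); it assumes AWS credentials and a region are already configured, and the printed IDs are whatever your account returns.

```python
import boto3

ec2 = boto3.client("ec2")

# The default VPC (if present) is flagged IsDefault=true and uses 172.31.0.0/16.
default_vpcs = [v for v in ec2.describe_vpcs()["Vpcs"] if v["IsDefault"]]

if default_vpcs:
    print("Default VPC:", default_vpcs[0]["VpcId"], default_vpcs[0]["CidrBlock"])
else:
    # A deleted default VPC can be recreated with a single call.
    recreated = ec2.create_default_vpc()["Vpc"]
    print("Recreated default VPC:", recreated["VpcId"], recreated["CidrBlock"])
```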

Custom VPC Sizing and Structure

  • When creating VPCs
    • Carefully consider the IP range (CIDR block) for the VPC in advance, since the primary block cannot be changed later
    • What is the size of the VPC? How many services/instances will be in the VPC? Try to predict the future use cases.
    • Choose a VPC range that does not conflict with other VPCs, vendors, or on-prem networks.
    • VPC structure: tiers and resiliency (AZ)
  • Sizing: planning CIDR ranges
    • minimum size /28 (16 IPs), maximum /16 (65536 IPs)
    • avoid commonly used ranges (10.0.x.x through 10.10.x.x) to prevent future conflicts, so 10.16.x.x would be a nice starting point
    • What is the highest number of regions the business could use? Reserve 2+ unique networks per region per environment (general, prod, staging, dev), e.g. us-east-2 dev #1 gets 10.16.0.0/16 and us-east-2 prod #1 gets 10.18.0.0/16.
  • Structuring
    • How many subnets will be used to organize services?
    • How many hosts/IPs will be necessary?
    • Determine number of AZs to use (e.g., 3 AZs for actual use, with a spare for reserve)
    • Split into environments (e.g., dev, staging, prod, and a spare)
      • More realistically, environments are placed in different member accounts and different VPCs.
    • Split into tiers (e.g. web, application, database, and a spare)
    • This means 4 AZs x 4 environments x 4 tiers = 64 subnets, which splits a /16 VPC into 64 /22 subnets (a /18 would split into /24s, etc.); see the sketch after this list.
    • VPC sizing can also be done bottom-up (determine the required services first, then derive the ranges)
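
As a concrete illustration of the /16 → 64 x /22 split above, here is a small Python sketch using only the standard ipaddress module; the 10.16.0.0/16 range and the environment/AZ/tier names are just the examples from this list, and the assignment order is arbitrary.

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.16.0.0/16")

# Splitting a /16 into 64 equal parts needs 6 extra prefix bits -> /22 subnets.
subnets = list(vpc_cidr.subnets(new_prefix=22))
assert len(subnets) == 64

environments = ["general", "prod", "staging", "dev"]
azs = ["a", "b", "c", "spare-az"]
tiers = ["web", "app", "db", "spare-tier"]

# 4 environments x 4 AZs x 4 tiers = 64 combinations, one /22 each.
plan = {
    (env, az, tier): subnet
    for (env, az, tier), subnet in zip(
        ((e, a, t) for e in environments for a in azs for t in tiers), subnets
    )
}

print(plan[("prod", "a", "web")])  # 10.16.64.0/22 with this ordering
```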

Custom VPC Architecture

  • Avoid using Bastion hosts for private VPC access
  • Traffic into and out from a VPC is denied by default
  • default vs dedicated tenancy: provision resources on shared or dedicated hardware
    • Beware: if dedicated tenancy is configured at the VPC level, all resources in that VPC must use dedicated hardware (costly); default tenancy allows choosing per-resource tenancy later on
  • Each VPC has one primary private IPv4 CIDR block
    • min /28 (16 IPs) and max /16 (65,536 IPs)
    • optional: secondary IPv4 blocks can also be created
    • optional: a single AWS-assigned IPv6 CIDR block; look towards using IPv6 by default in the future. All IPv6 addresses are publicly routable (there is no private vs public block distinction), but access still has to be explicitly configured.
  • VPC DNS service
    • VPC DNS IP: VPC base IP + 2 (10.0.0.0/16 → 10.0.0.2)
    • main settings (both are enabled in the sketch after this list)
      • enableDnsHostnames: assign public DNS hostnames to instances that have public IP addresses
      • enableDnsSupport: enable DNS resolution within the VPC
  • Subnets in a VPC can communicate with each other by default. Only access across the VPC boundary is blocked.
  • Each subnet has 5 reserved IP addresses.
    • .0 = network address
    • .1 = VPC router network interface
    • .2 = reserved for DNS
    • .3 = reserved for future use
    • .255 (the last address in the subnet) = network broadcast address; reserved even though broadcast is not actually supported in a VPC
  • 1 DHCP Options set per VPC; DHCP option sets can be created but not edited
  • Per-subnet IP address setting
    • can auto-assign public IPv4 to in-subnet resources
    • can auto-assign public IPv6 to in-subnet resources
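
The settings above can be wired together roughly as follows. This is a hedged boto3 + ipaddress sketch, not a production setup: it assumes configured credentials, and the 10.16.0.0/16 VPC CIDR, the 10.16.0.0/20 subnet, and the us-east-2a AZ are purely illustrative.

```python
import ipaddress
import boto3

ec2 = boto3.client("ec2")

# The primary IPv4 CIDR block is fixed at creation time (min /28, max /16).
vpc_id = ec2.create_vpc(CidrBlock="10.16.0.0/16", InstanceTenancy="default")["Vpc"]["VpcId"]

# The two DNS attributes must be set in separate ModifyVpcAttribute calls.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})

subnet_cidr = "10.16.0.0/20"
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock=subnet_cidr, AvailabilityZone="us-east-2a"
)["Subnet"]["SubnetId"]

# Per-subnet setting: auto-assign a public IPv4 address to instances launched here.
ec2.modify_subnet_attribute(SubnetId=subnet_id, MapPublicIpOnLaunch={"Value": True})

# The subnet's 5 reserved addresses (network, router, DNS, future use, broadcast).
net = ipaddress.ip_network(subnet_cidr)
reserved = [net.network_address + i for i in range(4)] + [net.broadcast_address]
print([str(ip) for ip in reserved])
# ['10.16.0.0', '10.16.0.1', '10.16.0.2', '10.16.0.3', '10.16.15.255']
```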

VPC Routing

See: AWS Internet Gateway

  • The VPC router is an HA device that routes traffic between subnets. The router is configured with route tables.
    • Each subnet may have one route table max.
    • If no route table is configured for a subnet, the VPC main route table is used by default.
  • A route table is made of routes. A route can match a single IP (/32) or a network range. A packet leaving a subnet is routed by the most specific matching route (longest prefix wins), except as noted below. A basic route-table setup is sketched after this list.
    • A local route matches an address within the VPC's own CIDR block.
    • Local routes always take priority over other routes, regardless of prefix length.
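
A basic routing setup might look like the boto3 sketch below: a subnet gets its own route table with a default route to an Internet Gateway. The VPC and subnet IDs are placeholders; everything else uses standard EC2 API calls.

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"        # placeholder VPC ID
subnet_id = "subnet-0123456789abcdef0"  # placeholder subnet ID

# Create an Internet Gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A new route table already contains the non-removable "local" route for the
# VPC CIDR, which always wins over any other route.
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]

# 0.0.0.0/0 is the least specific route, so it only applies when nothing more
# specific (including the local route) matches.
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# Explicit association replaces the subnet's implicit use of the main route table.
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```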

Trying to access the internet from a private subnet?

If you have a compute instance in a private subnet and want it to access the internet without giving it a public IP, consider using a NAT instance such as fck-nat or alternat. These are much more affordable than a NAT Gateway: the hourly fee of a small EC2 instance is far lower, and there is no per-GB data processing charge.
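
For reference, wiring a private subnet to a NAT instance comes down to two calls; this is a hedged sketch with placeholder IDs, and it assumes the NAT instance (e.g. a fck-nat AMI) is already running in a public subnet.

```python
import boto3

ec2 = boto3.client("ec2")
nat_instance_id = "i-0123456789abcdef0"   # placeholder NAT instance
private_rtb_id = "rtb-0123456789abcdef0"  # placeholder route table of the private subnet

# A NAT instance forwards traffic that is not addressed to itself,
# so source/destination checking must be disabled.
ec2.modify_instance_attribute(InstanceId=nat_instance_id, SourceDestCheck={"Value": False})

# Send all non-local traffic from the private subnet through the NAT instance.
ec2.create_route(
    RouteTableId=private_rtb_id,
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=nat_instance_id,
)
```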

Bastion Host/Jumpbox

  • A host (e.g. EC2 instance) used to access private VPC resources (private EC2 instances).
  • Usually accepts connections only from a limited set of IP addresses (see the sketch after this list).
  • Better alternatives exist now.
  • For better security, only allow access to the bastion host through the web console (Session Manager) or a VPN (and the VPN shouldn’t allow SSH access from all IPs).
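
If a classic bastion is still used, the "limited IP addresses" point usually translates into a security group rule like the boto3 sketch below; the group ID and the 203.0.113.10 office address are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder bastion security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            # Only this single /32 may open SSH connections to the bastion.
            "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "office IP"}],
        }
    ],
)
```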

NACL

NACL: Network Access Control Lists

  • NACL is a firewall that can be assigned to a subnet. Traffic between resources within a subnet is not affected. Each subnet can only have one NACL, though a NACL can be assigned to multiple subnets.
  • NACLs have 2 rulesets: inbound & outbound (not necessarily request and response!)
  • NACLs are stateless firewalls.
  • A rule can be an explicit allow or an explicit deny.
  • Each rule has a rule number (inbound rules can reuse the same numbers as outbound rules; numbers only need to be unique within a ruleset). Rules are evaluated from the lowest rule number upward, and AWS applies the first rule that matches the traffic. Traffic that doesn’t match any numbered rule hits the catch-all rule (*), which always denies and cannot be modified or removed.
  • Since it’s stateless, response traffic on ephemeral ports (e.g. 1024-65535) must be allowed explicitly (see the sketch after this list).
  • When an instance in subnet A talks to an instance in subnet B, four rules are needed (two for A, two for B): A outbound (request to B’s well-known port), B inbound (request arriving on the well-known port), B outbound (response sent back to A’s ephemeral port), A inbound (response arriving on A’s ephemeral port).
  • NACLs can be overkill. For most use cases, use security groups.
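
To make the stateless behaviour concrete, here is a hedged boto3 sketch of a NACL that accepts inbound HTTPS and explicitly allows the response traffic back out on ephemeral ports; the VPC ID is a placeholder and the rule numbers are arbitrary (they are evaluated lowest-first, per direction).

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")["NetworkAcl"]["NetworkAclId"]

# Inbound: clients anywhere may connect to port 443.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Egress=False,
    Protocol="6", RuleAction="allow",  # protocol 6 = TCP
    CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)

# Outbound: because the NACL is stateless, responses leaving on the clients'
# ephemeral ports need their own explicit allow rule.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Egress=True,  # numbers are per-direction
    Protocol="6", RuleAction="allow",
    CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)

# Anything not matched by a numbered rule falls through to the catch-all (*) deny.
```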