Enforcing Squid Access Policies for Amazon S3 and Yum in an AWS VPC
This tutorial demonstrates how to configure an open‑source Squid proxy in an AWS VPC to restrict Internet access, allow only approved Amazon S3 buckets and Yum repositories, route traffic through specific gateways, and achieve high availability using Auto Scaling and Route 53.
The article walks through a complete example where Alice, a CTO, uses Squid as a web proxy to enforce strict outbound traffic policies for her company's AWS environment. First, a VPC is created with separate subnets and network ACLs that block direct Internet access, forcing all traffic through the Squid instance.
Squid is installed on an Amazon Linux EC2 instance (sudo yum update -y and sudo yum install -y squid) and configured via /etc/squid/squid.conf. The default localnet ACLs are replaced with a VPC-wide CIDR rule (acl localnet src 10.1.0.0/16) to limit requests to internal instances only.
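The source-side restriction described above can be sketched as a squid.conf fragment; the 10.1.0.0/16 CIDR comes from the article, while the deny rule shown is one conventional way to reject out-of-VPC clients:

```
# /etc/squid/squid.conf (sketch)
# Replace the default RFC 1918 localnet ACLs with the VPC CIDR so that
# only instances inside the VPC may use the proxy.
acl localnet src 10.1.0.0/16

# Reject anything that does not originate inside the VPC; destination-based
# allow rules (Yum, S3) are combined with localnet further down the file.
http_access deny !localnet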
To permit Yum repository access, a series of acl yum dstdomain entries is added, one per AWS region, followed by the access rule http_access allow localnet yum. After restarting Squid (sudo service squid restart), the configuration is verified with curl -I http://www.google.com and Yum checks.
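A minimal sketch of the Yum ACLs follows. The repository hostnames shown are the typical Amazon Linux repo endpoints for two regions; the full tutorial adds one pair of entries per region used:

```
# Allow the Amazon Linux Yum repositories (two regions shown here;
# repeat for every region your instances pull packages from).
acl yum dstdomain repo.us-east-1.amazonaws.com
acl yum dstdomain packages.us-east-1.amazonaws.com
acl yum dstdomain repo.eu-west-1.amazonaws.com
acl yum dstdomain packages.eu-west-1.amazonaws.com

# Internal instances may reach the Yum repositories, nothing else yet.
http_access allow localnet yum
```

Because Squid merges ACL lines that share a name, each additional region is just another dstdomain line under the same yum ACL.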
For Amazon S3, ACLs using dstdom_regex are created for all regions (acl s3 dstdom_regex .*s3\.amazonaws\.com, etc.) and an access rule http_access allow localnet s3 is added. Bucket-specific whitelisting is achieved with two new ACLs: one for virtual-host-style URLs (acl virtual_host_urls dstdomain mybucket.s3.amazonaws.com) and one for path-style URLs (acl path_urls url_regex s3\.amazonaws\.com/mybucket/.*); the corresponding http_access lines then replace the generic S3 rule.
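Putting the two bucket-specific ACLs together yields a fragment like the following; "mybucket" is the placeholder bucket name used in the article:

```
# Generic region-wide S3 rule (later superseded by per-bucket rules):
acl s3 dstdom_regex .*s3\.amazonaws\.com

# Per-bucket allow-listing. Virtual-host-style requests carry the bucket
# name in the hostname; path-style requests carry it in the URL path.
acl virtual_host_urls dstdomain mybucket.s3.amazonaws.com
acl path_urls url_regex s3\.amazonaws\.com/mybucket/.*

# These replace the broad "http_access allow localnet s3" rule:
http_access allow localnet virtual_host_urls
http_access allow localnet path_urls
```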
The guide explains why HTTPS path-style URLs are blocked (under TLS the URL path, which carries the bucket name, is encrypted, so Squid sees only the hostname in the CONNECT request) and mentions Squid's SSL Bump feature as a possible solution, though the AWS CLI works with virtual-host-style URLs, whose bucket name appears in the hostname, without any decryption.
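If SSL Bump were used, a minimal sketch (assuming a Squid build with SSL support and a locally generated CA bundle at a placeholder path, which all clients would need to trust) could look like this:

```
# SSL Bump sketch — /etc/squid/squid-ca.pem is a placeholder CA cert/key
# bundle that every client must trust. Decrypting traffic has real
# operational and security costs; the article treats this as optional.
http_port 3128 ssl-bump cert=/etc/squid/squid-ca.pem

# Peek at the TLS client hello first, then bump (decrypt) the connection
# so url_regex ACLs can match path-style S3 URLs.
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```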
To control outbound interfaces, Alice adds a second ENI in a “resource” subnet and uses tcp_outgoing_address directives to send Yum and S3 traffic via an Internet Gateway (IGW) and all other traffic via a Virtual Private Gateway (VGW). This enables low‑latency S3 access while routing other traffic to the on‑premises data center.
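The split routing can be expressed with tcp_outgoing_address, which binds matching traffic to a given local source address. The two ENI addresses below are placeholders; the article only specifies that one ENI sits in the IGW-routed "resource" subnet and the other in the VGW-routed subnet:

```
# 10.1.2.10 (placeholder) = ENI in the "resource" subnet, routed via the IGW.
# 10.1.1.10 (placeholder) = primary ENI, routed via the VGW to on-premises.
tcp_outgoing_address 10.1.2.10 yum
tcp_outgoing_address 10.1.2.10 s3

# Everything else leaves through the VGW-routed interface.
tcp_outgoing_address 10.1.1.10
```

Squid evaluates tcp_outgoing_address lines in order and uses the first match, so the unqualified line acts as the catch-all default.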
High availability is achieved by placing a single Squid instance in an Auto Scaling group with a fixed size of one and updating a Route 53 DNS record (proxy.example.com) via a startup script that calls aws route53 change-resource-record-sets. If the instance fails, the ASG launches a new one and the DNS update redirects clients automatically.
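The startup script might be sketched as follows. The hosted zone ID, record name, and IP address are placeholders, and the instance-metadata lookup is commented out so the JSON-building portion runs anywhere; the actual aws route53 call is shown at the end:

```shell
#!/bin/bash
# Hypothetical user-data sketch: repoint proxy.example.com at the
# replacement Squid instance when it boots. Zone ID is a placeholder.
HOSTED_ZONE_ID="Z0000000EXAMPLE"
RECORD_NAME="proxy.example.com"

# On EC2 the address would come from instance metadata, e.g.:
#   PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# A fixed placeholder is used here so the sketch runs outside EC2.
PRIVATE_IP="${PRIVATE_IP:-10.1.1.10}"

# Build the Route 53 change batch that UPSERTs the proxy's A record.
build_change_batch() {
  cat <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${RECORD_NAME}",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{ "Value": "${PRIVATE_IP}" }]
    }
  }]
}
EOF
}

# The call made from the real startup script:
# aws route53 change-resource-record-sets \
#   --hosted-zone-id "$HOSTED_ZONE_ID" \
#   --change-batch "$(build_change_batch)"
build_change_batch
```

A short TTL (60 seconds here) keeps client failover time low after the Auto Scaling group replaces the instance.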
In conclusion, the solution shows how to replace IP‑based security rules with DNS‑based policies, enforce selective access to Yum and S3, route traffic based on destination, and build a resilient proxy architecture on AWS.
Architects Research Society
A daily treasure trove for architects, expanding your view and depth. We share enterprise, business, application, data, technology, and security architecture, discuss frameworks, planning, governance, standards, and implementation, and explore emerging styles such as microservices, event‑driven, micro‑frontend, big data, data warehousing, IoT, and AI architecture.