Stable release: 3.1 beta / 3.0 / 2.7 (March 2, 2009)
Written in: C/C++ (Squid 3)
Operating system: Cross-platform
Type: Web cache, proxy server
License: GNU General Public License
Website: www.squid-cache.org
Squid is a proxy server and web cache daemon. It has a wide variety of uses: speeding up a web server by caching repeated requests, caching web, DNS and other computer network lookups for a group of people sharing network resources, and aiding security by filtering traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other protocols including TLS, SSL, Internet Gopher and HTTPS.[1] The development version of Squid (3.1) includes IPv6 and ICAP support. The Squid web site claims that, when deployed in front of a server application, Squid can improve its performance by up to four times. Squid is especially effective when traffic is concentrated on one or a few pages, for example during an unexpected surge, since nearly all of those requests can then be served from the cache.
Squid was originally developed by Duane Wessels as the Harvest object cache, part of the Harvest project at the University of Colorado at Boulder.[2] [3] Further work on the program was completed at the University of California, San Diego and funded via two grants from the National Science Foundation.[4] Squid is now developed almost exclusively through volunteer efforts.
Squid is primarily designed to run on Unix-like systems but it also runs on Windows-based systems. Released under the GNU General Public License, Squid is free software.
Web proxy:
Caching is a way to store requested Internet objects (e.g. web pages) available via the HTTP, FTP, and Gopher protocols on a system closer to the requesting site. Web browsers can then use the local Squid cache as a proxy HTTP server, reducing access time as well as bandwidth consumption. This is often useful for Internet service providers that want to increase speed for their customers, and for LANs that share an Internet connection. Because it is also a proxy (i.e. it behaves like a client on behalf of the real client), it can provide some anonymity and security. However, it can also introduce significant privacy concerns, since it can log a lot of data, including requested URLs, the exact date and time of each request, the name and version of the requester's web browser and operating system, and the referrer.
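A minimal forward-proxy configuration sketches this setup; the port number, client network, and cache directory below are illustrative values rather than universal defaults:

    # Listen on the conventional proxy port
    http_port 3128
    # Allow only clients on the local network to use the proxy
    acl localnet src 192.168.0.0/24
    http_access allow localnet
    http_access deny all
    # Keep up to 1000 MB of cached objects on disk
    cache_dir ufs /var/spool/squid 1000 16 256

Browsers on the LAN would then be configured to use port 3128 on the machine running Squid as their HTTP proxy.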
A client program (e.g. a browser) either has to explicitly specify the proxy server it wants to use (typical for ISP customers), or it can use a proxy without any extra configuration: “transparent caching”, in which case all outgoing HTTP requests are intercepted by Squid and all responses are cached. The latter is typically a corporate set-up (all clients are on the same LAN) and often introduces the privacy concerns mentioned above.
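The difference between the two modes is visible in the http_port directive. The sketch below assumes Squid 2.6–3.0 syntax, where the keyword is "transparent" (later 3.x releases rename it "intercept"); interception additionally requires the router or firewall to redirect port-80 traffic to the proxy, which is not shown here:

    # Explicit proxy: clients are configured to send requests to this port
    http_port 3128
    # Interception proxy: redirected traffic arrives on this port without client configuration
    http_port 3129 transparent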
Squid has some features that can help anonymize connections, such as disabling or changing specific header fields in a client's HTTP requests. Whether these are set, and what they are set to do, is up to the person who controls the computer running Squid. People requesting pages through a network which transparently uses Squid may not know whether this information is being logged.[5] Within UK organisations at least, users should be informed if computers or internet connections are being monitored.
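For instance, an administrator running Squid 3 could suppress identifying headers with directives along these lines (the exact policy is the administrator's choice, and in Squid 2.x the directive is called header_access instead):

    # Strip headers that identify the client's browser and the page it came from
    request_header_access User-Agent deny all
    request_header_access Referer deny all
    # Do not add the client's IP address or advertise the proxy in forwarded requests
    forwarded_for off
    via off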
Reverse proxy:
The above setup—caching the contents of an unlimited number of webservers for a limited number of clients—is the classical one. Another setup is “reverse proxy” or “webserver acceleration” (using http_port 80 accel vhost). In this mode, the cache serves an unlimited number of clients for a limited number of—or just one—web servers.
As an example, if slow.example.com is a “real” web server, and www.example.com is the Squid cache server that “accelerates” it, the first time any page is requested from www.example.com, the cache server would get the actual page from slow.example.com, but later requests would get the stored copy directly from the accelerator (for a configurable period, after which the stored copy would be discarded). The end result, without any action by the clients, is less traffic to the source server, meaning less CPU and memory usage, and less need for bandwidth. This does, however, mean that the source server cannot accurately report on its traffic numbers without additional configuration, as all requests would seem to have come from the reverse proxy. A way to adapt the reporting on the source server is to use the X-Forwarded-For HTTP header reported by the reverse proxy, to get the real client's IP address.
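Using the hostnames from this example, an accelerator configuration might look roughly like the following sketch (the peer name and ACL name are arbitrary labels chosen here for illustration):

    # Accept requests for the public site on port 80
    http_port 80 accel vhost
    # Fetch cache misses from the real origin server
    cache_peer slow.example.com parent 80 0 no-query originserver name=origin
    # Only accelerate the intended domain
    acl our_site dstdomain www.example.com
    http_access allow our_site
    cache_peer_access origin allow our_site
    http_access deny all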
It is possible for a single Squid server to serve both as a normal and a reverse proxy simultaneously. For example, a business might host its own website on a web server, with a Squid server acting as a reverse proxy between clients (customers accessing the website from outside the business) and the web server. The same Squid server could act as a classical web cache, caching HTTP requests from clients within the business (i.e. employees accessing the internet from their workstations), so accelerating web access and reducing bandwidth demands.
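A combined configuration is essentially a matter of declaring two listening ports with different roles and writing access rules that distinguish internal clients from outside visitors; a sketch, reusing the illustrative names from above, might begin:

    # Reverse proxy for the company's public web server
    http_port 80 accel vhost
    cache_peer slow.example.com parent 80 0 no-query originserver name=origin
    # Ordinary caching proxy for employees' workstations
    http_port 3128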
Supported platforms:
Squid can run on the following operating systems:
AIX
BSDI
Digital Unix
FreeBSD
HP-UX
IRIX
Linux
Mac OS X
NetBSD
NeXTStep
OpenBSD
SCO OpenServer
Solaris
UnixWare
Windows