I grabbed the GET request lines from the HTTP headers of two visits to www.microsoft.com, one made through a configured proxy and one made without a proxy (the latter being the kind of request a transparent proxy receives).
No proxy (or transparent proxy):

GET / HTTP/1.1

Proxy configured:

GET http://www.microsoft.com/ HTTP/1.1
See how only the request made with a proxy configured includes the hostname and protocol in the URL? This gives the proxy enough information to connect to the required destination host, tells it which protocol to use (http or https), and also allows an alternate port to be specified (e.g. GET http://www.satan.com:666/bad/evil.html HTTP/1.1).
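To make the difference concrete, here is a small sketch (the function name and defaults are my own, not from any browser's source) of how a client would build the two request-line forms above: the bare path for a direct connection, and the full URL, including any non-default port, when talking to a configured proxy.

```python
def build_request_line(host, path="/", port=80, scheme="http", via_proxy=False):
    """Build the HTTP/1.1 request line a client would send.

    Direct connections send only the path ("GET / HTTP/1.1");
    proxy-configured clients send the absolute URL, which carries
    the scheme, hostname, and any non-default port.
    """
    if via_proxy:
        # Omit the port when it is the scheme's default, as browsers do.
        default = 443 if scheme == "https" else 80
        hostport = host if port == default else f"{host}:{port}"
        return f"GET {scheme}://{hostport}{path} HTTP/1.1"
    return f"GET {path} HTTP/1.1"
```

For example, `build_request_line("www.satan.com", "/bad/evil.html", 666, via_proxy=True)` reproduces the alternate-port request shown above.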
The hostname is not specified in the request made without a proxy because the browser believes it is making a direct TCP connection to the destination web server, so there is no point in sending the complete URL in the GET request. A transparent proxy has to determine the destination system from the destination IP address of the packets it receives, and it has to assume the port, because it can't get the original port from the TCP header: the header has been modified to redirect the traffic to the proxy service.
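On Linux at least, the kernel does remember the pre-redirect destination: when traffic is intercepted with an iptables REDIRECT rule, a proxy can query the `SO_ORIGINAL_DST` socket option to recover the original IP and port. A rough sketch (Linux/netfilter-specific; the helper names are my own):

```python
import socket
import struct

# From linux/netfilter_ipv4.h; not exposed in Python's socket module.
SO_ORIGINAL_DST = 80

def parse_sockaddr_in(raw):
    """Decode the sockaddr_in buffer SO_ORIGINAL_DST returns:
    2-byte family, 2-byte port (network order), 4-byte IPv4 address."""
    port, = struct.unpack_from("!H", raw, 2)
    ip = socket.inet_ntoa(raw[4:8])
    return ip, port

def original_destination(sock):
    """Return (ip, port) the client originally tried to reach,
    for a connection intercepted by an iptables REDIRECT rule."""
    raw = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    return parse_sockaddr_in(raw)
```

This is how interception-mode proxies typically work around the rewritten TCP header, at the cost of portability: the mechanism is netfilter-specific, which fits the point about how hard this is to do portably.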
Making this work within the IP forwarding system would be much more complicated: it would be much harder to keep portable, it could slow down all traffic, and it would require the proxy to understand every possible higher-level protocol. It would definitely be useful, but probably not easy to actually implement.
Check with the Squid devs if you wish, though; they may be able to provide more info...