I think it depends quite a lot on where you draw the boundary between applications (i.e. what your definition of an application is), and which use cases you take into consideration.
While you could implement a web browser as an amalgamation of wget/curl, an HTML/XML parser that calls a simple application for each document node, a standalone JavaScript engine that interacts with all of this, and a "simple" displayer that would "just" place the output of the above on the screen (and return inputs back to some core coordinating process), it would be even messier than probably any of today's browsers.
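The fetch-and-parse half of that amalgamation does work in miniature, though. A toy sketch of the idea (the printf stage stands in for a real fetch such as `curl -s <url>`, and the grep/sed "parser" is a deliberate oversimplification, not a usable HTML parser):

```shell
# A toy "browser pipeline": fetch -> parse -> display.
# The printf stage substitutes for a real fetch (`curl -s <url>`);
# grep/sed play the part of a (very naive) HTML link extractor.
printf '<a href="/one">1</a>\n<a href="/two">2</a>\n' \
  | grep -o 'href="[^"]*"' \
  | sed 's/^href="//; s/"$//'
# prints:
#   /one
#   /two
```

It works for this trivial input precisely because each stage has a narrow job and a textual interface; it falls apart as soon as real-world HTML, CSS, and scripting enter the picture.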
As for piping the data to an external process - that is actually how it started. If you are concerned about the size of an average web application's code: yes, they are often big (and often because they are a layer sitting on top of a platform written in an interpreted programming language, rather than a "simple" application), but compare them to their equivalents. Email clients, office suites... you name it. All of these are quite complex and have too much functionality to be implemented as a couple of processes communicating through pipes. The tasks you use these applications for are often complex too. There are no good simple solutions to complex problems.
Maybe it's time to look at the motivation behind the UNIX motto "applications that do a little, but are good at it". Replace "applications" with "general modular units" and you arrive at one of the basic good programming practices: do things modularly, so that parts can be reused and developed separately. That's what really matters, IMHO (and the choice of programming language has very little to do with it).
p.s. (following the comment): In the strictest sense you are mostly right - web applications do not follow the UNIX philosophy (of being split into several smaller standalone programs). Yet the whole concept of what an application is seems rather murky - sed could probably be considered an application in some situations, while it usually acts just as a filter.
Hence it depends on how literally you want to take it. If you use the usual definition of a process - something running as a single process in the sense the kernel sees it - then, for example, a PHP web application interpreted by a module inside httpd is the exact opposite. And do loaded shared libraries still fall within the scope of a single process (because they share its address space), or are they already something more separate (immutable from the programmer's point of view, completely reusable, and communicating through a well-defined API)?
On the other hand, most web applications today are split into client and server parts that run as separate processes - usually on different systems (and even on physically separate hardware). The two parts communicate through a well-defined (usually textual) interface (XML/HTML/JSON over HTTP). Often (at least in the browser) several threads process the client side of the application (JavaScript/DOM, input/output, ...), sometimes even a separate process running a plugin (Java, Flash, ...). That sounds exactly like the original UNIX philosophy, especially on Linux, where threads are processes by (almost) any account.
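That client/server split can even be mimicked with nothing but two processes and a pipe. A minimal sketch - the one-line "VERB path" request format and the JSON-ish response are invented purely for illustration, not any real protocol:

```shell
# Client and server as two separate processes joined by a pipe.
# The request format ("VERB path") and the JSON response are
# made up for this illustration only.
request() { printf 'GET /greeting\n'; }
server() {
  while read -r verb path; do
    printf '{"path":"%s","body":"hello"}\n' "$path"
  done
}
request | server    # prints {"path":"/greeting","body":"hello"}
```

Swap the pipe for a TCP socket and the made-up format for HTTP, and you have the shape of every web application: small cooperating processes talking through a well-defined textual interface.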
In addition to that, the server parts are pretty much always split into several distinct pieces - a separate database engine performing requested operations on structured (or unstructured) data is the canonical example. It just doesn't make much sense to integrate a database into a web server. Yet it also doesn't make much sense to split a database into several processes that would each specialise in, say, only parsing requests, fetching data from storage, or filtering the data... One has to strike a balance between creating an omnipotent behemoth and a swarm of almost nilpotent workers that spend most of their time talking to each other.
As for the textual interface: note that what was true for data processed 40 years ago is not necessarily true today - binary formats are cheaper in both the space and the power required for (de)serialization, and the amount of data is immensely larger.
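A quick illustration of the space side of that claim - the same number encoded as decimal text versus as a fixed-width binary word (the octal escapes below are just the big-endian 32-bit encoding of 123456789, i.e. 0x075BCD15):

```shell
# The same number as decimal text vs. as a 4-byte binary word.
printf '123456789' | wc -c           # 9 bytes of text
printf '\007\133\315\025' | wc -c    # 4 bytes (123456789 as big-endian 32-bit)
```

The text form also has to be parsed digit by digit on every read, whereas the binary word can often be used as-is; multiply that over today's data volumes and the 40-year-old trade-off flips.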
Another important question is what the UNIX philosophy has actually been aimed at. I don't think numerical simulations, banking systems or publicly accessible photo galleries/social networks were ever its target. The maintenance of systems running such services, however, definitely has been - and likely will remain so, even in the future.