
tl;dr What is the best / most reliable way of testing and comparing the performance of a XAMPP / Apache server running on different computers and cloud services? If I only need a high-level result (just a comparison of performance between the different options), will measuring browser response times be enough? Or is there a better option?


I am using XAMPP in three versions (with PHP 7.4, 8.0 and 8.1) on four different machines:

  • Very, very old PC with Windows XP
  • Old PC with Windows 10
  • Very new and ultra fast PC with Windows 11
  • Raspberry Pi 3B+

In addition, I am also using an Apache server (but not the entire XAMPP stack) on a 12-year-old NAS with Linux, and in a couple of cloud solutions from AWS, DigitalOcean and OVH.

At the same time, I am a complete newbie when it comes to performance testing. What would be the best / most reliable way of testing the speed of serving websites / processing API requests on all these machines?

The scope of / expectations from such a test are quite high-level in my case. I just want to learn:

  • Whether using a given piece of hardware is or isn't pointless (in the case of actual computers)
  • Whether there is any clearly readable correlation between service configuration / monthly fee and web server performance (in the case of cloud services)

Thus, I don't expect any sophisticated results, reports or numbers. Just one (or more) factors that will allow me to distinguish / compare results of the same piece of software (that I normally use) running on different hardware and cloud options.

Since I am a complete beginner in this topic, the only idea that came to my mind is to manually invoke each request directly from my browser and measure the response time as reported by the browser's dev tools. Is there anything better?
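For reference, the closest scriptable equivalent of my current manual approach would be a small timing loop like this (just a sketch; the helper name and the commented-out URL are my own placeholders, not an existing tool):

```python
import time


def time_requests(fetch, n=10):
    """Call fetch() n times and return the elapsed time of each call in seconds."""
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        fetch()
        timings.append(time.perf_counter() - start)
    return timings


# Example usage (hypothetical URL of the app under test):
# import urllib.request
# timings = time_requests(lambda: urllib.request.urlopen("http://old-pc.local/index.php").read())
# print(f"min/avg/max: {min(timings):.3f}/{sum(timings)/len(timings):.3f}/{max(timings):.3f} s")
```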


EDIT: To clarify the question: I don't want to test the hardware. I want to test:

  • Exactly the same piece of software (XAMPP / Apache httpd)
  • In exactly the same version
  • Running on a few different hardware (or cloud) platforms

I am the author of several web apps (but I am purely a developer, not a DevOps engineer, hence the question). All of them are purely internal (intranets) and not public at all. All of them are:

  • Served using exactly the same platform (XAMPP) in exactly the same version
  • Developed using the same language (PHP), framework (Yii 2) and version
  • Developed by the same team and using the same coding principles

Based on the above, we can assume (can we?) that even though these are all different web apps, the differences between them are almost purely business-related (functional), and therefore their performance will be strongly tied to the number of visitors and the hardware used to serve them.

Some of these apps are visited / used by 3-5 people per day, some of them by 50 per hour (at most). By finding a uniform way to test the performance of XAMPP on different machines, I want to answer questions like:

  • Is that very old PC enough to serve the XYZ app, with 3-5 visitors per day, or is using it pointless / nonsense?
  • Can I use that fairly new laptop as a secondary (backup) server for the ABC app, which has 25 visits per hour at most, or is such hardware not enough?

Sorry if you find my question and efforts pointless, but I am an eco-maniac and I want to do virtually everything to give my old hardware a second life, second chance or second assignment. Trashing it is the very, very last scenario I would consider. Hence this question.

trejder
  • Xampp, that is just a platform. How some hardware performs completely depends on what kind of application you run on it. And I mean: completely. There's not one golden test that gives a "best" number for any application. A music streaming service will behave differently than a microblogging service will behave differently than a photo gallery will behave differently than an online office suite... So. Define what your use case actually is. "Xampp" is not a use case, it's just a tool. (Without doing that, your question is both too unspecific and only asking for opinions) – Marcus Müller May 29 '22 at 21:20
  • Thank you for a detailed answer and for pointing out obvious mistakes on my side. Now I can clearly see that my original question was asked at too high a level. Please take a look at my extended and clarified question. Is it now any closer to being valid and answerable? Or is it still an off-topic (here), opinion-based question? Thank you. – trejder May 31 '22 at 06:53
  • being ecologically responsible probably means you should do *anything* but run an old PC or laptop as server, if you could alternatively let your application be hosted in a large data center! The power used by your laptop will be in no good relation to the power used by a small VM on a cluster that runs literal millions of VMs, with much less power than it needs to run millions of your laptops – Marcus Müller May 31 '22 at 07:01
  • but in terms of the comparison: still boils down to what *your* application actually *does*. Not familiar with Yii2, but it just seems to be an MVC framework. You can use that for a lot of very different use cases. If you want a realistic benchmark, then you'll have to benchmark your actual application. Sure, many benchmarks will be highly correlated, but especially the IO dependency profile of workloads differs immensely, and no general statements are possible beyond the obvious kind, like "well, a 2021 16-core server with 128 GB RAM and NVMe SSDs is probably faster than your 15-year-old laptop" – Marcus Müller May 31 '22 at 07:04
  • OK, thank you. And asking (you or anyone else) on exactly which benchmark you're talking about (as I don't know any reliable for benchmarking a website or server performance) would make this question even more opinion based and off-topic (as a pure software recommendation), right? – trejder May 31 '22 at 07:10
  • no, it would make the question depend on your application! You're the one who offers some service; for example, let's say, a book library administration tool that can automatically fetch cover images, search for books, insert books. So, write an endpoint that triggers these typical workloads. Trigger it a couple hundred to thousand times (depending on how long they'd take), and measure how long it takes on one hand, and for on-premises hardware, how much power a 20€ power meter shows you use while idling and while executing these workloads on the other. – Marcus Müller May 31 '22 at 07:13
  • Maybe the confusion comes from the fact that you're looking at the too low level? I am not looking for a benchmark that will test a particular functionality (or perform any kind of end-to-end test), but that will simply compare some hardwares serving a website or web app in general. That's why I suggested checking webpage loading times in my original question and asked, if there is any other idea to test this. – trejder May 31 '22 at 07:32
  • I'm getting a bit tired of repeating this: there's no "general website performance benchmark". This really makes no sense. It's comparable to: "How much power does an aircraft consume? I will not tell you whether I mean a Boeing 747, a space shuttle, a mini-propeller machine or a quadcopter, nor how loaded it is. But please give me a good benchmark!" I mean, you're a programmer. You **know** that it makes a difference what your program does. Why are we even arguing about this? – Marcus Müller May 31 '22 at 07:41

1 Answer


but I am an eco-maniac and I want to do virtually everything to give my old hardware a second life, second chance or second assignment.

That's admirable, but old hardware is power-hungry, and running your own servers locally means you're running a full server for you and your workload alone. Nobody else shares the server's power consumption, even when you're barely using it!

You'll need to factor that in. Let's make a rough estimate.

Very, very old PC with Windows XP

Idles at 20-30 W, probably. Fully loaded, it might use 200 W (?). It really depends! Buy an energy meter and figure out how much your hardware uses while idle, and during a workload that simulates your actual use case.
You're the one who offers some service; for example, let's say, a book library administration tool that can automatically fetch cover images, search for books, insert books. So, write an endpoint that triggers these typical workloads. Trigger it a couple hundred to a couple thousand times (depending on how long they'd take), measure how long that takes, and put that into relation.
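Such a "trigger it N times and time it" harness can be sketched in a few lines of Python (a sketch under assumptions: the endpoint URL and all names here are placeholders, not part of any existing tool):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def run_benchmark(workload, total=1000, concurrency=10):
    """Run `workload` `total` times with `concurrency` parallel workers.

    Returns the wall-clock time in seconds for the whole run -- the number
    to put into the spreadsheet for this machine.
    """
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(workload) for _ in range(total)]
        for f in futures:
            f.result()  # propagate any errors from the workload
    return time.perf_counter() - start


# Example usage (hypothetical benchmark endpoint on the machine under test):
# import urllib.request
# elapsed = run_benchmark(
#     lambda: urllib.request.urlopen("http://old-pc.local/bench.php").read(),
#     total=1000, concurrency=10)
# print(f"1000 requests took {elapsed:.1f} s")
```

Alternatively, the `ab` (ApacheBench) tool that ships with the Apache httpd in XAMPP does the same job from the command line, e.g. `ab -n 1000 -c 10 http://host/bench.php`.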

Nobody but you can make a setup that evaluates how long your specific hardware needs for your specific workloads, and how much power it will use over a year.

This is going to be a bit of spreadsheet work! Make a spreadsheet that lists how many hours per year (or: over the expected remaining run time) your hardware spends idle and how many it spends working. You do that by making a lower, an upper and a most-likely estimate for the number of workloads it serves during that time – and by workloads, I mean units as simulated by your self-written benchmark above. Say that benchmark serves 10,000 simulated user interactions and takes 20 minutes to complete. You sit down and scratch your head (or better: you read your own deployment server's statistics) and estimate that your users make 200,000 to 2,000,000 requests a year, your best guess being 500,000. So that's 200,000/10,000 · 20 minutes = 400 minutes ≈ 6.67 h to 2,000,000/10,000 · 20 minutes = 4000 minutes ≈ 66.7 h of hard work per year. The year has 8760 hours. Now you know for how many hours your system draws the "idle" power, and for how many it draws the "full load" power.

In our example:

Machine: Old PC

  • Power idle: 20 W
  • Power working: 200 W
  • Time per 10,000 requests: 0.33 h
  • scenarios:
    • low load (200,000 requests per year)
      • 6.67 h work · 200 W = 1.33 kWh
      • 8753.33 h idle · 20 W = 175 kWh
      • total: 176 kWh
    • high load (2,000,000 requests per year)
      • 66.7 h work · 200 W = 13.3 kWh
      • 8693.3 h idle · 20 W = 173.9 kWh
      • total: 187.2 kWh

Machine: Very fast PC

  • Power idle: 15 W (modern PCs are better at idling!)
  • Power working: 500 W
  • Time per 10,000 requests: 0.0033 h
  • scenarios:
    • low load (200,000 requests per year)
      • 0.0667 h work · 500 W = 0.03 kWh
      • ~8760 h idle · 15 W = 131 kWh
      • total: 131 kWh …
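The arithmetic above can be folded into a small helper (the numbers are the hypothetical ones from this example; it is a rough model, not a measurement):

```python
HOURS_PER_YEAR = 8760


def yearly_kwh(idle_w, load_w, bench_minutes, bench_requests, requests_per_year):
    """Rough yearly energy use: full-load power while working, idle power otherwise."""
    work_h = requests_per_year / bench_requests * bench_minutes / 60
    idle_h = HOURS_PER_YEAR - work_h
    return (work_h * load_w + idle_h * idle_w) / 1000  # Wh -> kWh


# Old PC (20 W idle, 200 W load, 20-minute benchmark serving 10,000 requests):
low = yearly_kwh(20, 200, 20, 10_000, 200_000)     # low-load scenario, ≈ 176 kWh
high = yearly_kwh(20, 200, 20, 10_000, 2_000_000)  # high-load scenario, ≈ 187 kWh
```

Swapping in each machine's measured idle/load power and benchmark time gives you one comparable kWh-per-year number per machine.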

Now, for cloud providers you simply can't know how much power your workload consumes. You can only benchmark how long it takes. However, it's a fair assumption to say "I'm not using more power than I pay for. OVH pays at least 0.20 €/kWh, so if I pay 50 € per year for a VM there, and half of that is just energy costs, it can't use more than 125 kWh" or so. Do your modelling!
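Under the same spreadsheet logic, that upper bound works out as (all numbers are the hypothetical ones from the text, not real OVH prices):

```python
yearly_fee_eur = 50       # what you pay for the VM per year (example value)
energy_share = 0.5        # assume half of the fee covers electricity
price_eur_per_kwh = 0.20  # assumed price the provider pays per kWh

# Upper bound on the energy the VM can possibly consume per year:
max_kwh = yearly_fee_eur * energy_share / price_eur_per_kwh
print(max_kwh)  # 125.0
```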

I'd be very surprised if you end up preferring to run your old PC for the kinds of workloads you need.

Marcus Müller