Last week we made a second appearance on Microsoft’s Channel 9 show “On .NET” to give an update on the progress we have been making with the Peachpie compiler. As discussed in the livecast, we were quite interested in examining the performance differences between ASP.NET and ASP.NET Core.
What is ASP.NET Core?
As explained in a previous blog post, ASP.NET Core is a lean, modular and composable framework for web and cloud applications, and it is completely open-source. It is cross-platform, meaning it runs on Windows, Linux and macOS. ASP.NET Core apps can run either on .NET Core or on the full .NET Framework.
We talked extensively about .NET Core and how Peachpie can compile PHP to it on last week’s “On .NET”:
ASP.NET versus ASP.NET Core
As Bertrand Le Roy points out in the video, it is very interesting to compare the performance of ASP.NET and ASP.NET Core, as the lean, modular version should carry very little overhead. The key difference in our comparison is that the ASP.NET test script is benchmarked on IIS, whereas the ASP.NET Core version is tested on the open-source Kestrel web server.
The goal of our analysis is to demonstrate the higher throughput of the Kestrel web server compared to IIS, which should allow ASP.NET Core to achieve better performance.
Why not .NET Core?
You may wonder why we didn’t also test .NET versus .NET Core. The reason is that the performance differences only become visible once larger applications are tested. In microbenchmarks such as the ones we have been running, .NET Core cannot show its full strength and may, in some instances, even perform slightly worse than regular .NET.
It will be interesting to revisit these comparisons once we are able to test a larger application or framework.
Test Methodology
Our hardware consisted of a development machine with the following configuration: Core i7-2600 @ 3.4 GHz, 16GB DDR3 RAM. The OS used for the test was Windows 10 Pro x64. We set up two web servers:
- .NET 4.6 + Microsoft-IIS/10.0 (i.e. IIS Express 10.0)
- .NET Core 1.0 + Kestrel (the open-source ASP.NET Core web server)
The IIS server has a default document index.html, which simply writes out “Hello World!”, as well as a compiled test.php script, which writes out the current time using the microtime(true) function in a loop of 1,000 iterations (see http://php.net/manual/en/function.microtime.php).
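For reference, here is a minimal sketch of what such a test script might look like; this is an assumed shape for illustration, not necessarily the exact file we compiled:

<?php
// Illustrative sketch of the benchmarked script (an assumption, not the exact file):
// print the current Unix timestamp with microseconds 1,000 times per request.
for ($i = 0; $i < 1000; $i++) {
    echo microtime(true), "\n";
}

Both servers run the same compiled version of this script; only the hosting layer differs.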
The Kestrel server hosts the same compiled test.php script and a fallback RequestDelegate, which writes out “Hello World!”. This is the biggest difference between the IIS and Kestrel setups: on Kestrel, the “Hello World!” response comes from the RequestDelegate, which does not touch the file system, instead of from a default document.
For our benchmark, we used the standard ab.exe utility (ApacheBench) with 4 concurrent requests and ran the workload against each server for 10 seconds. The command used is “ab -c4 -t10”, where “-c” specifies the number of concurrent requests and “-t” the duration of the test in seconds. We ran the command against four different addresses and compared the results; the IIS Express 10.0 server listened on port 14082 and the Kestrel server on port 5000.
Results
To see the detailed results, please visit the respective Pastebin links below:
IIS Express 10.0, Default Document
ab -c4 -t10 http://localhost:14082/
http://pastebin.com/ei3ngS7K
IIS Express 10.0, test.php
ab -c4 -t10 http://localhost:14082/test.php
http://pastebin.com/VFGhtnnV
Kestrel & ASP.NET Core, Fallback RequestDelegate
ab -c4 -t10 http://localhost:5000/
http://pastebin.com/Y7X2k6th
Kestrel & ASP.NET Core, test.php
ab -c4 -t10 http://localhost:5000/test.php
http://pastebin.com/e9d1zPAd
The results indicate the following numbers of completed requests:
- IIS Express 10.0 (Default Document): 8293
- IIS Express 10.0 (test.php): 9783
- Kestrel & ASP.NET Core (Fallback RequestDelegate): 43130
- Kestrel & ASP.NET Core (test.php): 13152
Therefore, judging strictly by the number of requests completed within 10 seconds at a concurrency of 4, the Kestrel web server with the fallback RequestDelegate served the highest number of requests of the four tested configurations.
Follow our progress on Twitter, on our GitHub repository, or on Facebook.