notes-computer-programming-webDevelopmentFrameworkPerfNotes

as of Feb 2017

https://www.techempower.com/benchmarks/#section=data-r13&hw=ph&test=query&b=2&s=1&l=435q4b&d=6p considering frameworks in: clojure elixir go java js python ruby scala, on Linux platforms

using 'physical' (dedicated hardware) numbers to start with. when there are multiple criteria, 'mixing and matching' between similar entries (e.g. different variants of the same framework) is allowed. I round arbitrarily to nice round numbers a lot.

lowest avg latencies are around 33ms on the 'multiple queries' task; lowest max latencies (disregarding all frameworks with errors) are around 100ms. The frameworks with avg latency <= about 3x the best 33ms (i.e. ~100ms) OR max latency <= about 3x the best 100ms (i.e. ~300ms), with no errors, with stddev no more than about 100ms, and with duplicates and near-duplicates removed, are (a code sketch of this filter follows the list):

fasthttp dropwizard servlet-postgres-raw http-kit nodejs bottle-mysql-raw ninja-standalone kami jawn wildfly-ee7 flask revenj.jvm vertx-web-jdbc sinatra-sequel-puma- ringojs compojure-raw echo-prefork grizzly-jersey undertow-jersey-hika web2py-optimized tapestry falcore goji go revel-raw gin asyncio express-mysql activeweb http4s hapi-mysql spring wicket gemini-postgres django-py3 phoenix puma-padrino
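(to make that selection rule concrete, here's a minimal sketch of the filter i'm applying by eye; the framework rows and numbers are made up for illustration, not real benchmark data:)

    # sketch of the filter above, for the 'multiple queries' thresholds:
    # keep a framework if it has no errors, its stddev is modest, and
    # EITHER its avg latency OR its max latency is within ~3x of the best
    def passes(avg_ms, stddev_ms, max_ms, errors,
               avg_lim=100, max_lim=300, stddev_lim=100):
        return errors == 0 and stddev_ms <= stddev_lim and \
               (avg_ms <= avg_lim or max_ms <= max_lim)

    rows = {  # framework -> (avg_ms, stddev_ms, max_ms, errors); made-up numbers
        "fasthttp": (35, 12, 110, 0),
        "slowpoke": (500, 300, 2000, 0),
    }
    keep = [name for name, stats in rows.items() if passes(*stats)]  # ['fasthttp']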

lowest avg latencies are around 0.5ms on the 'json serialization' task; lowest max latencies (disregarding all frameworks with errors) are around 20ms. The frameworks with avg latency <= about 12ms OR max latency <= about 60ms, with no errors, with stddev no more than about 30ms, and with duplicates and near-duplicates removed, are:

echo-prefork colossus revenj.jvm falcon undertow netty vertx rapidoid nodejs fasthttp-mysql-prefo gemini servlet kami wheezy.web jawn express gin grizzly beego falcore bottle jlhttp finatra phoenix finagle fintrospect spray webgo play-scala-anorm play2-java api-hour+aiohttp.web finch spark sinatra-sequel-puma- dropwizard restexpress jooby flask-py3 pyramid-py2 activeweb http4s goji tapestry aleph compojure comsat turbogears http-kit django-py3 luminus ninja-standalone ringojs web2py-optimized puma-padrino cherrypy-py3

single query: avg latency below ~25ms or max latency below ~150ms, stddev below ~40ms (and no errors; i'll stop writing 'no errors' below):

fasthttp-postgresql- nodejs kami vertx-web-jdbc grizzly-jersey bottle-mysql-raw goji echo falcore gin go undertow sinatra-sequel-puma- gemini-mysql http4s express-mysql dropwizard servlet-postgres-raw activeweb phoenix luminus http-kit compojure-raw ringojs flask beego wicket tapestry tornado play2-scala-anorm-li wheezy.web-py3 ninja-standalone django-postgresql pyramid-py3 revel-jet turbogears wildfly-ee7 spark puma-padrino spring pedestal hapi cherrypy unicorn-padrino

fortunes: avg latency below ~25ms or max latency below ~150ms, stddev below ~40ms:

play2-scala-anorm-li nodejs gemini-mysql grizzly-jersey echo-prefork kami http4s fasthttp-mysql-prefo undertow goji servlet-raw go gin sinatra-sequel-puma- express-mysql revenj.jvm bottle-mysql-raw falcore dropwizard vertx-web-jdbc phoenix tapestry wheezy.web http-kit spring wicket flask pyramid-py3 compojure-raw revel-jet ninja-standalone wildfly-ee7 luminus turbogears flask-py3 django-py3 pedestal puma-padrino hapi-mysql

data updates: avg latency below ~350ms or max latency below ~600ms, stddev < 130ms:

fasthttp phoenix nodejs servlet-postgres-raw echo-std go akka-http dropwizard hapi pyramid-py2 http-kit bottle-mysql-raw flask activeweb undertow kami ringojs revel-raw falcore goji gin compojure-raw gemini-mysql jawn django-py3 sinatra-sequel-puma- spring vertx-web-jdbc

plaintext task (using round 12 numbers; there seemed to be some problems with this task in round 13), avg latency <~240ms, or max <~660ms, and stddev <~150ms:

rapidoid sinatra-sequel-puma- jetty-servlet netty ngx_mruby vertx play-scala-anorm finch http4s cherrypy

and the plaintext task on round 13 again, avg latency <=390ms or max ~4000ms(!), stddev <~520ms:

sinatra-sequel-puma- vertx-web wheezy.web spray falcon rapidoid comsat-servlet-under bottle aleph jooby s-server gin colossus netty dropwizard

ones on at least 2 of the above lists:

fasthttp dropwizard servlet http-kit nodejs bottle ninja-standalone kami jawn wildfly-ee7 flask revenj.jvm sinatra ringojs compojure echo undertow web2py-optimized (only on 2 lists) tapestry falcore goji go revel gin express activeweb http4s hapi spring wicket gemini django phoenix puma-padrino colossus falcon netty vertx rapidoid wheezy.web grizzly beego spray play finch spark jooby pyramid aleph turbogears luminus cherrypy pedestal comsat

counts of appearance on the above 6 lists (with both 'plaintext' tasks counting as one list):

dropwizard 6 servlet 6 http-kit 5 nodejs 5 bottle 6 ninja-standalone 4 kami 5 jawn 3 wildfly-ee7 3 flask 4 revenj.jvm 3 sinatra 6 ringojs 4 compojure 5 echo 5 undertow 5 web2py-optimized 2 tapestry 4 falcore 5 goji 5 go 4 revel 4 gin 6 express 4 activeweb 4 http4s 5 hapi 4 spring 4 wicket 3 gemini 5 django 5 phoenix 5 puma-padrino 4 colossus 2 falcon 2 netty 2 vertx 6 rapidoid 2 wheezy.web 4 grizzly 4 beego 2 spray 2 play 4 finch 2 spark 2 jooby 2 pyramid 4 aleph 2 turbogears 3 luminus 3 cherrypy 3 pedestal 2 comsat 2
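(a tally like that is mechanical; a minimal sketch, with hypothetical abbreviated lists standing in for the six task lists above:)

    # count how many of the six task lists each framework appears on,
    # after collapsing variants to one name per list
    from collections import Counter

    task_lists = [  # abbreviated stand-ins for the six lists above
        ["dropwizard", "servlet", "gin"],
        ["dropwizard", "gin", "vertx"],
    ]
    counts = Counter(name for lst in task_lists for name in set(lst))
    # counts["gin"] == 2 here; with the real lists, gin comes out at 6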

so the ones with count >= 4:

dropwizard 6 servlet 6 http-kit 5 nodejs 5 bottle 6 ninja-standalone 4 kami 5 flask 4 sinatra 6 ringojs 4 compojure 5 echo 5 undertow 5 tapestry 4 falcore 5 goji 5 go 4 revel 4 gin 6 express 4 activeweb 4 http4s 5 hapi 4 spring 4 gemini 5 django 5 phoenix 5 puma-padrino 4 vertx 6 wheezy.web 4 grizzly 4 play 4 pyramid 4

so the ones with count >= 5:

dropwizard 6 servlet 6 http-kit 5 nodejs 5 bottle 6 kami 5 sinatra 6 compojure 5 echo 5 undertow 5 falcore 5 goji 5 gin 6 http4s 5 gemini 5 django 5 phoenix 5 vertx 6

so let's take a closer look at the >=4s (note: nodejs, servlet, undertow are bare platforms, not frameworks (framework 'none'); for echo, http-kit, revel, i can't find them in the filters box (actually, i later found http-kit under platform 'ring', and the others appeared with the following selections too); in order to get the platforms for all of the frameworks below, i also enabled platforms cowboy, jax-rs, jetty, netty, nio2, none, rack, (ring, for http-kit), ringojs, tornado, uWSGI, vertx; and enabled framework 'none'; note also that framework 'go' is categorized as 'stripped' rather than 'realistic'):

activeweb bottle compojure django dropwizard express falcore flask gemini gin go goji grizzly hapi http4s kami ninja-standalone phoenix play puma-padrino pyramid ringojs sinatra spring tapestry vertx wheezy.web

https://www.techempower.com/benchmarks/#section=data-r13&hw=ph&test=json&b=2&s=1&l=435q4b&p=hm8f04-qmwo39-13bq4f&d=6p&a=2&f=zemy1u-yxwven-suspv3-zih69q-qmajen-hr7qbj-3j

so now let's look at what we have and put some tighter bounds on the criteria. Let's start by finding the worst value among the >=4s (this excludes other stuff that got added back in by the Nones, such as comsat-servlet-under, and takes the best among similar entries such as flask-pypy and flask-nginx-uwsgi), then cutting that value in half and adding a little bit (i'll call the framework with the worst latency value the cutoff; a sketch of this rule follows the first list below):

for 'json serialization' task (note that flask- and bottle- nginx-uwsgi were excluded due to errors) (cutoff hapi) avg latency <~25ms, and max <~600ms, and stddev <=~ 15ms (hmm, mb this should have been 30ms b/c bottle-pypy is 60ms, oh well):

dropwizard servlet ninja-standalone kami sinatra ringojs compojure echo undertow tapestry falcore goji go revel gin express activeweb (barely) http4s spring gemini django (barely) phoenix padrino vertx wheezy.web grizzly play pyramid
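(the 'cut in half and add a bit' rule above, as a sketch; the padding was eyeballed per task rather than fixed, so this is only illustrative:)

    # 'cutoff_value_ms' is the worst surviving framework's statistic; the
    # tightened limit is half of that plus an eyeballed pad
    def tighter_limit(cutoff_value_ms, pad_ms):
        return cutoff_value_ms / 2 + pad_ms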

for 'single query' task, (excluding outliers flask-pypy and bottle-pypy when computing criteria limits; note that flask- and bottle- nginx-uwsgi were excluded due to errors) (cutoff hapi) avg latency <~25ms and max <400ms and stddev 30ms:

dropwizard servlet nodejs ninja-standalone kami sinatra ringojs compojure (raw only) echo undertow tapestry falcore goji go revel gin express activeweb http4s spring gemini django (barely) phoenix puma-padrino (barely) vertx grizzly pyramid (barely)

for 'multiple queries' task, (excluding outliers flask-pypy and bottle-pypy when computing criteria limits; note that flask- and bottle- nginx-uwsgi were excluded due to errors) (cutoff puma-padrino) avg latency <~130ms and max <460ms and stddev 50ms:

dropwizard servlet http-kit nodejs ninja-standalone kami sinatra ringojs compojure echo undertow tapestry falcore goji go revel gin express activeweb http4s hapi spring gemini phoenix vertx grizzly

for 'fortunes' task (excluding outliers flask-pypy and bottle-pypy when computing criteria limits; note that flask- and bottle- nginx-uwsgi were excluded due to errors) (cutoff ringojs), avg latency <~30ms and max <~400ms and stddev <~25ms:

dropwizard servlet http-kit nodejs bottle ninja-standalone kami sinatra echo undertow tapestry falcore goji go revel gin express http4s spring gemini django phoenix vertx wheezy.web grizzly play pyramid

for 'data updates' task (excluding outliers flask-pypy and bottle-pypy when computing criteria limits; note that flask- and bottle- nginx-uwsgi were excluded due to errors; note that there was another flask-pypy that was NOT excluded, but it failed the criteria (max too high); also tapestry, grizzly were not found in the list; also ninja-standalone, http4s excluded due to errors) (cutoff express-mysql), avg latency <~400ms and max <~1600ms and stddev <~180ms:

dropwizard servlet http-kit nodejs kami sinatra ringojs echo undertow falcore goji go revel gin activeweb hapi spring gemini django phoenix vertx play pyramid

for 'plaintext' task (round 13) (cutoff revel), avg latency <~2600ms and (no max latency limit, since so many frameworks which were fast elsewhere are getting near the max, 8000ms, here) and stddev <~1500ms (many removed for errors) (i don't trust this one; so many otherwise fast frameworks' 'max' is near the max recorded value of 8000, and some otherwise fast frameworks are among the slowest here; also some frameworks were missing, eg flask):

dropwizard servlet bottle ninja-standalone kami sinatra compojure echo undertow falcore go gin express activeweb django phoenix puma-padrino vertx wheezy.web grizzly pyramid

plaintext task (round 12) (django, phoenix not found) (cutoff gemini), avg latency <~3300ms and max latency <9000ms and stddev <~4600ms (i don't trust this one either; so many otherwise fast frameworks have huge max latency):

dropwizard ninja-standalone sinatra echo undertow (only comsat-servlet-undertow) falcore go gin puma-padrino vertx pyramid

so the new counts are:

dropwizard 6 servlet 6 http-kit 3 nodejs 4 bottle 2 (often excluded due to errors, though) ninja-standalone 6 kami 6 flask (often excluded due to errors, though) sinatra 6 ringojs 4 compojure 4 echo 6 undertow 6 tapestry 4 falcore 6 goji 5 go 6 revel 5 gin 6 express 5 activeweb 5 http4s 4 hapi 2 spring 5 gemini 5 django 5 phoenix 6 puma-padrino 4 vertx 6 wheezy.web 3 grizzly 5 play 3 pyramid 5

so, removing all but 5s and 6s and bottle and flask (which were often excluded) we have:

activeweb bottle django dropwizard echo express falcore flask gemini gin go goji grizzly kami ninja-standalone phoenix pyramid revel servlet sinatra spring undertow vertx

Now let's do another round of cuts, but this time using "cloud" instead of "physical".

for 'json serialization' task, (vertx not found) (cutoff spring) avg latency <~40ms and max <500ms and stddev 55ms:

bottle django echo express falcore flask gemini gin go goji grizzly kami phoenix pyramid revel servlet undertow

for 'single query' task, (vertx not found) (cutoff pyramid-py3) avg latency <~40ms and max <500ms and stddev 27ms:

echo express falcore flask gemini gin go goji kami ninja-standalone phoenix revel servlet sinatra spring undertow

for 'multiple queries' task, (vertx not found) (cutoff pyramid-py3) avg latency <~500ms and max <1250ms and stddev 140ms:

activeweb bottle dropwizard echo falcore flask gemini gin go goji kami ninja-standalone phoenix revel servlet sinatra undertow

for 'fortunes' task, (vertx not found, a bottle was discarded for errors) (cutoff django-postgresql) avg latency <~60ms and max <480ms and stddev 60ms:

dropwizard echo express falcore flask gemini gin go goji grizzly kami phoenix revel sinatra spring undertow

for 'data updates' task, (vertx, grizzly not found, a bottle, flask, ninja were discarded for errors) (cutoff django-postgresql) avg latency <~1050ms and max <2900ms and stddev 400ms:

activeweb bottle dropwizard echo falcore gemini gin go goji kami phoenix revel servlet sinatra undertow

for 'plaintext' task, (vertx not found, many were discarded for errors) (cutoff revel) avg latency <~1650ms and no max and stddev 1200ms:

activeweb echo express gemini gin go grizzly kami pyramid servlet undertow

'plaintext' task (round 12) is not available in the cloud, so we'll do physical again. (django, grizzly, kami, phoenix, revel not found) (cutoff servlet) avg latency <~1650ms and max 8550ms and stddev 3500ms:

dropwizard echo express falcore flask gin go sinatra spring vertx

so the new counts are:

activeweb 3 bottle 3 django 1 dropwizard 4 echo 6 express 4 falcore 6 flask 5 gemini 6 gin 6 go 6 goji 5 grizzly 3 kami 6 ninja-standalone 2 phoenix 5 pyramid 2 revel 5 servlet 5 sinatra 5 spring 3 undertow 6 vertx 1

4s and above:

dropwizard echo (platform: None) express falcore flask gemini gin go goji kami phoenix revel (platform: None) servlet (platform: servlet) sinatra undertow (platform: undertow)

selected platforms still needed: cowboy jax-rs none rack servlet undertow vertx

if only looking at frameworks, not platforms with no framework, then we can deselect framework None and platform undertow (leaving platforms cowboy, jax-rs, none, servlet, rack, vertx). The remaining items are:

dropwizard express falcore flask gemini gin go goji kami phoenix sinatra

This is small enough to eyeball. Further observations on both physical and cloud latencies:

JSON serialization (disable server 'thin' for a better graph b/c sinatra-thin is bad): 3 tiers on cloud: fast: falcore, gin, go, goji, kami (avg <5ms, stddev <7ms, max <250ms); medium: gemini, phoenix, express (avg <~20ms, stddev <15ms, max <500ms); slow: dropwizard, flask, sinatra (avg <35ms, stddev <60ms, max <1000ms). On physical, everything is similar except sinatra and flask are worse.

single query (disable server 'thin' for a better graph b/c sinatra-thin is bad): no sharp 'tiers', but ordering is similar to JSON serialization, except 'express' is now comparable to dropwizard and sinatra, on cloud but not physical. In terms of stddev and max, everyone is similar except phoenix is a little better. On physical, flask and sinatra are worse than dropwizard.

multiple queries: everyone is similar except that express, dropwizard, gemini are moderate, and flask, sinatra are worse (but sometimes one or the other of sinatra-puma, sinatra-sequel-puma does okay).

fortunes: gin, kami, go, goji, falcore are great, and phoenix too, although it's a little slower. express is good, except that on cloud it's slow on average but still great on tails. dropwizard, gemini are moderate. flask, sinatra are bad (but sometimes one or the other of sinatra-puma, sinatra-sequel-puma does okay).

data updates: phoenix is better than the rest. dropwizard postgres is good (dropwizard mysql did poorly on cloud). sinatra-puma and mb sinatra-sequel-puma may be okay, other sinatras not. express is not good. flask is okay or bad on physical vs cloud. others are similar to each other.

plaintext (round 13): stddevs are huge and maxes are near the cap. Lots of errors. i bet the avgs aren't statistically significant. i think everything looks similar.

to summarize those observations, we have 3 tiers: fast: falcore, gin, go, goji, kami; medium: phoenix (may be a little better than the other mediums), gemini, express, dropwizard; slow: sinatra, flask.

So let's remove sinatra and flask. Remaining are:

dropwizard (Java) express (JS) falcore (Go) gemini (Java) gin (Go) go (Go) goji (Go) kami (Go) phoenix (Elixir)

are we missing something in other languages/platforms? enable all languages and platforms, disable databases other than mysql and postgres, capture latency lists up to the last item in the chosen list (the previous list; the one starting with dropwizard and ending with phoenix). I'm not being very careful here.

take intersection of all tasks and cloud/physical.
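(this is plain set intersection over the per-task, per-environment survivor lists; a sketch, with hypothetical abbreviated sets standing in for the real lists:)

    # intersect survivor sets across every (task, environment) pair
    from functools import reduce

    survivors = {  # abbreviated stand-ins for the real survivor lists
        ("single query", "physical"): {"ulib", "bottle", "sinatra"},
        ("single query", "cloud"): {"ulib", "bottle"},
    }
    final = reduce(set.intersection, survivors.values())  # {'ulib', 'bottle'}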

intersection of single query, multiple queries, fortunes, data updates (intersection of physical and cloud): ulib (C++) urweb (Ur) cpoll_cppsp (C++) bottle (Python) sinatra (Ruby) redstone (Dart) silicon (C++)

We can eliminate the C++, Ur, and Dart ones, leaving just bottle and sinatra. A quick glance at these shows that yes, they are significantly worse than the others.

the 'go' framework is categorized as a 'stripped' implementation, so eliminate it.

So the final list is:

dropwizard (Java, full ORM) express (JS, full ORM) falcore (Go, raw ORM) gemini (Java, micro ORM) gin (Go, raw ORM) goji (Go, raw ORM) kami (Go, raw ORM) phoenix (Elixir, full ORM)

And here are the stats:

JSON serialization, physical throughput (in kiloresponses per second): 80-277. Worst are goji, dropwizard, phoenix. express, gemini not represented.
JSON serialization, cloud throughput (in kiloresponses per second): 25-90. Worst are dropwizard, phoenix. express, gemini not represented.
JSON serialization, physical latency: avg 1-5ms, stddev 2-8ms, max 20-215ms. Worst tail is dropwizard. express, gemini not represented.
JSON serialization, cloud latency: avg 3-35ms, stddev 2-54ms, max 30-615ms. Worst tail is dropwizard. express, gemini not represented.
single query, physical throughput: 35-125. Worst are phoenix, dropwizard, express.
single query, cloud throughput: 7-30. Worst are express, dropwizard, then phoenix.
single query, physical latency: avg 2-25ms, stddev 2-50ms, max 20-300ms. Worst tail is gemini.
single query, cloud latency: avg 12-40ms, stddev 8-35ms, max 115-650ms. (no worst tails)
multiple queries, physical throughput: 2-8. Worst is phoenix.
multiple queries, cloud throughput: 0.5-1.5. Worst is express.
multiple queries, physical latency: avg 33-130ms, stddev 5-80ms, max 150-650ms. Worst tail is gemini.
multiple queries, cloud latency: avg 170-530ms, stddev 50-90ms, max 500-1000ms. Worst tails are dropwizard, express, gemini.
fortunes, physical throughput: 30-110. Best is gemini.
fortunes, cloud throughput: 6-22. Worst are express, dropwizard.
fortunes, physical latency: avg 4-15ms, stddev 5-30ms, max 25-300ms. Worst tails are dropwizard, gemini.
fortunes, cloud latency: avg 13-45ms, stddev 4-40ms, max 80-550ms. Worst tails are dropwizard, gemini.
data updates, physical throughput: 0.3-2. Worst is express, best is phoenix.
data updates, cloud throughput: 0.15-1. Worst is express, best is phoenix.
data updates, physical latency: avg 130-660ms, stddev 6-290ms, max 170-1510ms. Worst tails are express, gemini.
data updates, cloud latency: avg 250-1550ms, stddev 45-pg0, max 450-3000ms. Worst tails are kami, goji, dropwizard, then express.

Counts of 'worsts': worst tails: dropwizard 6, gemini 6 to 8, express 3 to 5, kami 1, goji 1. worst throughput: goji 1, dropwizard 5, phoenix 5, express 6 to 8, gemini 0 to 2.

so, maybe both tails and throughput could be improved by dropping dropwizard. Dropping gemini could also improve tails, maybe at the expense of throughput at least in the Fortunes task. This would be dropping both of the Java ones, leaving:

express (JS, full ORM) falcore (Go, raw ORM) gin (Go, raw ORM) goji (Go, raw ORM) kami (Go, raw ORM) phoenix (Elixir, full ORM)

clicking through and eyeballing these, the average latency of express is almost always the worst in 'cloud', and worst or near worst in 'physical' (sometimes phoenix is worst in physical). However, the latency tails (stddev and max) aren't that much worse for express than for the others (around 2x worse). Express also tends to be the worst or second worst in terms of throughput.

If you get rid of express, then in cloud, compared to the remaining Go frameworks, phoenix has 4x higher max latency for JSON serialization (170 vs 40), 4x lower for single query (115 vs 475), 1.5x lower for multiple queries (470 vs 780), 2x higher for fortunes (180 vs 81), and 6x lower for data updates (450 vs 2900). Imo, without knowing what you'll be using it for, either phoenix or the others look like a good choice. Phoenix might have the edge, since it seems to be better when the numbers get bigger, but i'm not sure if this matters. Also, in 'physical' rather than cloud, phoenix's max latencies are even better, comparatively. The tradeoff is that phoenix appears to be worse at throughput in everything except data updates, by up to a factor of 3.

looking at google, and looking at github watches/stars/forks, within the Go frameworks/libraries, gin > gorilla > goji > falcore > kami.

Some sites comparing them are:

Note that many people mention gin (which bills itself as a faster Martini) and gorilla and beego and revel and goji; and even more people say that if you want a minimalist framework, it's easy to just go without and use Golang's stdlib (but possibly with gorilla/mux), and if you want a full-service framework, there's nothing in Go and you should use Rails or Django.

So, i'm going to drop falcore and kami and say that the list of really low-latency web frameworks (in languages i'm most leaning towards) is:

express (JS, full ORM) gin (Go, raw ORM) goji (Go, raw ORM) phoenix (Elixir, full ORM)

And now i'm also going to go back and compare these to the others i'm considering, namely bottle, flask, django, sinatra, grape. And since goji is less popular and we already have two other languages, i'll cut it.

first i'll focus on max latency. in the following, when available, i generally use the raw-DB 'flask' variant for flask, sinatra-unicorn for sinatra, and plain django, bottle, and unicorn-grape for the others. Note that flask with ORM performs much worse.
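(the 'Nx worse' figures below are just ratios of the quoted statistic, here max latency, rounded loosely; the numbers in this sketch are illustrative, not real data:)

    # 'Nx worse' = framework's statistic divided by the baseline framework's
    def times_worse(value_ms, baseline_ms):
        return value_ms / baseline_ms

    times_worse(540, 90)  # -> 6.0, the kind of figure reported below as '6x worse'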

json-serialization: bottle: better than gin, phoenix in cloud, 2x worse in physical. flask-py3: better than gin/phoenix for cloud, 3x worse for physical. sinatra-unicorn: same as gin in cloud, 3x worse in physical. unicorn-grape: same as gin.

single query: in cloud, gin is worse than pretty much everybody, and express is about the same as everybody. in physical, compared to gin, flask is 6x worse, django is 9x worse, bottle is 9x worse, sinatra and grape are 5x worse.

multiple queries: in cloud, compared to gin, flask is the same (with raw DB) or 2x worse (with ORM), django is 2x worse, bottle is 3x worse, sinatra and grape are about the same. in physical, compared to phoenix, flask is 1.5x worse, django is 4x worse, bottle is 5.5x worse, sinatra is 5x worse, grape is 5x worse.

fortunes: in cloud, compared to phoenix, flask is about the same, bottle is 2x worse, django is 4x worse, sinatra is about the same (grape is not listed). in physical, compared to gin, flask is 6x worse, bottle is 6x worse, django is 9x worse, sinatra is 5x worse.

data updates: in cloud, compared to gin, flask is about the same, bottle is 2x worse, django is 2.5x worse, grape is 2x worse, sinatra is about the same. in physical, compared to gin, flask and django are about the same, bottle is 6x worse, sinatra is 3x worse, grape is 3.5x worse.

now let's look at stddev.

json-serialization: in cloud, compared to phoenix, sinatra-unicorn is better, flask is 2x worse, bottle is better, grape is the same, django is 2x worse. in physical, compared to gin, bottle is 2x worse, flask is 2x worse, sinatra is 2x worse, django is 6x worse, grape is 2x worse.

single query: in cloud, compared to gin, flask is the same, django is better, bottle is the same, grape is better, sinatra is better. in physical, compared to gin, flask is 2x worse, django is 3x worse, bottle is 3x worse, grape is 2x worse, sinatra is 2x worse.

multiple queries: in cloud, compared to gin, flask is 2x worse, django is 4x worse, bottle is 5x worse, grape is 2x worse, sinatra is 2x worse. in physical, compared to gin, flask is the same, django is 4x worse, bottle is 5x worse, sinatra is 4x worse, grape is 4x worse.

fortunes: in cloud, compared to phoenix, flask is 1.5x worse, bottle is 3x worse, django is 5x worse, sinatra is the same (grape is not listed). in physical, compared to gin, flask is 2x worse, bottle is 3x worse, django is 5x worse, sinatra is 2x worse.

data updates: in cloud, compared to gin, flask is the same, bottle is 2x worse, django is 3x worse, sinatra is the same, grape is 2x worse. in physical, compared to gin, flask is 2x worse, django is 2x worse, bottle is 7x worse, sinatra is 7x worse, grape is 8x worse.

now let's look at avg latency.

json-serialization: in cloud, compared to phoenix, bottle is better, django is 2x worse, flask is 2x worse, sinatra is 3x worse, grape is 4x worse. in physical, compared to phoenix, bottle is better, flask is 2x worse, django is 4x worse, sinatra is 10x worse, grape is 17x worse.

single query: in cloud, compared to phoenix, flask is 2x worse, django is 3x worse, sinatra-unicorn is 4x worse, bottle is 5x worse, grape is 6x worse. in physical, compared to phoenix, flask is the same, django is 2x worse, bottle is 2x worse, sinatra is 7x worse, grape is 11x worse.

multiple queries: in cloud, compared to phoenix, flask is 1.5x worse, django is 3x worse, sinatra is 3x worse, grape is 4x worse, bottle is 5x worse. in physical, compared to phoenix, flask is better (by 3x!), django is the same, bottle is the same, sinatra is 4x worse, grape is 4x worse.

fortunes: in cloud, compared to phoenix, flask is 2x worse, bottle is 2x worse, django is 2.5x worse, sinatra is 2.5x worse (grape is not listed). in physical, compared to phoenix, flask is 1.5x worse, bottle is 2x worse, django is 3x worse, sinatra is 10x worse.

data updates: in cloud, compared to gin, sinatra is the same, flask is 2x worse, bottle is 3x worse, django is 4x worse, grape is 5x worse. in physical, compared to gin, flask is better, django is the same, bottle is 1.5x worse, sinatra is 7x worse, grape is 7x worse.

in the above, focusing on the >=5x worses (mixing cloud and physical):

so, at least for API backends, we should probably eliminate grape, sinatra, bottle, django from consideration. (note: it's not clear that flask is any better, since i've been getting numbers from the 'good' flask, which uses no ORM; flask with ORM does worse; and i've been using sinatra numbers from sinatra-unicorn; sinatra-sequel-puma usually does better). In fact, eyeballing the charts for flask (no ORM) vs flask (ORM) vs django and sinatra, with ORM flask is about the same as sinatra, and worse than django. Sinatra is well known but maybe not as much as flask (since there are more python programmers than ruby programmers).

I like grape, but it's not well known, and it appears to have much more latency than the alternatives in some cases.

Bottle is faster, but i think it's not well known enough (any more!).

in terms of general popularity,

http://hotframeworks.com/#top-frameworks

ranks:

django > express > flask > sinatra > phoenix > bottle > gin (grape and goji are not even listed).

which mostly accords with my intuition.

So back to just:

express (JS, full ORM) gin (Go, raw ORM) phoenix (Elixir, full ORM)

---

note that https://github.com/mroth/phoenix-showdown found it necessary to use express-cluster, not just plain express, in order for it to perform well (not surprising, since a single Node process runs javascript on one thread, so clustering is needed to use all the cores)

---

are some of these supported or not supported on heroku?

https://www.heroku.com/languages says it supports Node and Go (and Python) (amongst others). But not Elixir. There are hacks to get Heroku to support any language, though.

Still, i think that decides it for me. So it's between express and Go (maybe Gin, maybe just Gorilla).

---

so, between express and go, eyeballing the numbers at https://www.techempower.com/benchmarks/ , gin appears to have both lower latency and higher throughput in almost every test.

The benefit of express is greater popularity and maturity, though, so it's not completely ruled out.

---

https://www.google.com/search?q=express+gin+phoenix&ie=utf-8&oe=utf-8

https://www.google.com/search?q=express+phoenix+gin+bottle+flask+sinatra&ie=utf-8&oe=utf-8 https://www.google.com/search?q=express+phoenix+flask+sinatra https://www.reddit.com/r/elixir/comments/3q5bus/is_phoenix_more_framework_or_library_more_rails/

---

https://blog.jaredfriedman.com/2015/09/15/why-i-wouldnt-use-rails-for-a-new-company/

---

a person at https://www.quora.com/As-a-Web-developer-coming-from-Python-Django-background-should-learn-Golang-or-Elixir-Phoenix-Why

suggests looking at https://github.com/mattermost/platform as an example of an API server in Golang. https://gowalker.org/github.com/mattermost/platform/api?imports suggests that they just use net/http and gorilla/mux.

they also say that "Go is the best fit for API services and command line tool development".

they say " Go for API server, cmd line tools and DevOps? scripts. Elixir for web development. Vue.js for frontend development (in case you want to know) "

which is similar to what i have been thinking (although, is Elixir mature enough to replace Django? probably not yet; also, i am still considering React in place of Vue)

---

http://www.akitaonrails.com/2015/12/03/the-obligatory-flame-war-phoenix-vs-node-js

http://blog.carbonfive.com/2016/04/19/elixir-and-phoenix-the-future-of-web-apis-and-apps/

"Can I use the services I know and love, like GitHub?, Heroku, CircleCI?, Code Climate, New Relic, etc?

Mostly, with a few caveats. We’re using GitHub?, Heroku and CircleCI? and they all work great. CircleCI? needs a couple of additions to the circle.yml, but it’s nothing unusual. New Relic doesn’t support Elixir/Phoenix yet, but there’s the Erlang Observer and Exometer (see measuring your phoenix app). Code Climate doesn’t work out of the box, but if you’re adventurous, there are some projects you can probably get working. "

---

http://www.todobackend.com/

---

https://github.com/mroth/phoenix-showdown/blob/master/RESULTS_v3.md

https://github.com/mroth/phoenix-showdown

https://medium.com/@tschundeee/express-vs-flask-vs-go-acc0879c2122

http://blog.digg.com/post/141552444676/making-the-switch-from-nodejs-to-golang

https://www.reddit.com/r/golang/comments/1ye3z6/go_vs_nodejs_for_servers/

https://www.quora.com/It-seems-like-Go-is-better-than-Node-js-in-terms-of-performance-not-that-big-a-margin-and-syntax-so-why-is-Node-js-way-more-popular-than-Go

---

https://news.ycombinator.com/item?id=8672234

---

https://blog.acolyer.org/2018/06/28/how-_not_-to-structure-your-database-backed-web-applications-a-study-of-performance-bugs-in-the-wild/

---

2020:

https://elixirforum.com/t/django-vs-phoenix/22252/88

"running identical Rails and Phoenix apps (backed by full nights of setting up telemetry and measurements together with sysadmins). In both apps (pretty normal commercial CRUD + API + some bells-and-whistles-attached apps) the average latency was 9x - 12x less in Phoenix, and it also used 3x - 7x less RAM during most loads, extreme included (anywhere from 100 to 7000 users a minute)."

---

https://news.ycombinator.com/item?id=14848670

udfalkso on July 25, 2017 [-]

I recently rewrote an (admittedly old and stale) django app from the ground up with Elixir/Phoenix. The website gets quite a bit of traffic.

I went from an elaborate multi-tier caching setup, with varnish and memcache, to a ZERO caching setup. The amount of complexity reduced by doing this is huge.

The Elixir app hits the postgres db for nearly every request and I'm getting average response times around 45ms. Quite a bit of that is database wait time. It's super stable and efficient. And I'm running it on a dirt cheap, tiny node in the cloud, where before I needed two small EC2 instances to keep up with peak loads.

Also, now that I've gotten the hang of it I think it's actually more efficient with regards to development than either Django or Rails.

Here are the google crawler response times directly before/after the switch to Phoenix: http://imgur.com/a/FASyJ

pjungwir on July 25, 2017 [-]

You and blatyo are convincing me. :-) I really appreciate your openness in sharing these stats. I don't need to serve 2 million concurrent websockets. But I need to render HTML/JSON based on some database queries, under moderate traffic, and I'd like to get it under 200ms without trying so hard.

I mostly agree with DHH that response time is dominated by database queries and Rails is "fast enough". Most problems you can fix by improving your query patterns or doing some SQL tuning. And yet . . . Rails is still awfully slow! Most companies don't have time for Russian-doll caching, let alone making the designers consider that at design time. I would love to use something as effortless as Rails that still gave me snappy performance well into the app's maturity.

I'm fairly confident Elixir can do that, but it's hard to look at these benchmarks and come on HN and hear the Elixir folks saying, "Just trust me." There is a strong temptation to walk away thinking it just doesn't live up to the hype. So having some hard numbers is a great reassurance. Thank you!

passer-by-123 on July 25, 2017 [-]

The performance aspect also makes a huge difference in development. The application boots fast and the live reloading experience is rewarding. Having a 100 test cases that hit your endpoint+database running in less than a second is pure joy.

micmus on July 25, 2017 [-]

I can't stress how important that is. We have about 1.5k tests in our Phoenix app and they almost all hit the database - they run in 30 seconds. I compare this to a Rails app where 1k tests run 8 minutes. Having a fast test suite makes development so much easier and makes you rely more on tests.

---

top web frameworks according to https://stackshare.io/frameworks (manually filtering to web frameworks; clientside-ish frameworks eg angular, vue, react excluded)

2020/05 ('2005')

Node.js Django Laravel Rails spring boot symfony meteor codeigniter spring next.js django REST yii play asp.net core nestjs phoenix cakephp grails spring mean dropwizard tornado spring mvc zend phalcon vert.x nette jhipster adonisjs twig vaadin php mvc spring batch vapor gin gonic rocket django channels io.js kohana mojolicious

https://stackshare.io/index/languages-and-frameworks

asp.net django laravel rails expressjs spring boot flask ionic? symfony codeigniter spring

https://stackshare.io/top-tools/languages-and-frameworks/java

spring boot spring play

https://stackshare.io/index/frameworks

asp.net django laravel rails spring boot symfony codeigniter spring meteor next.js django rest play asp.net core yii phoenix cakephp

https://stackshare.io/microframeworks

expressjs flask django rest sinatra hapi koa lumen slim sails.js

https://stackshare.io/top-tools/languages-and-frameworks/python

django flask django rest tornado falcon

https://stackshare.io/top-tools/languages-and-frameworks/go

revel martini buffalo

https://stackshare.io/top-tools/languages-and-frameworks/javascript

expressjs koa

https://stackshare.io/top-tools/languages-and-frameworks/php laravel symfony lumen

https://stackshare.io/tools/top

django spring boot flask laravel expressjs asp.net symfony spring django rest

https://dzone.com/articles/top-5-java-frameworks-for-web-application-developm

spring mvc jsf vaadin gwt play 2 struts 2

https://dzone.com/articles/most-popular-java-web-frameworks

The top three are:

    Spring
    JSF
    GWT

Other notable Java Web Frameworks:

    Play!
    Struts
    Vaadin
    Grails

https://stackify.com/10-of-the-most-popular-java-frameworks-of-2020/

spring hibernate jsf gwt struts

https://www.mindinventory.com/blog/top-web-frameworks-for-development-golang/

gin beego iris echo revel martini buffalo

https://deepsource.io/blog/go-web-frameworks/

gin beego echo go kit fasthttp mux (gorilla) httprouter

https://github.com/mingrammer/go-web-framework-stars gin beego echo kit fasthttp mux revel httprouter martini

https://www.slant.co/topics/1412/~best-web-frameworks-for-go gin-gonic revel echo beego martini

---

notes on round 18 techempower benchmarks:

Single query, Multiple queries, Fortunes, Data updates (actually let's look at all of them a little bit)

maximum latency

C#: asp.netcore go: gin beego martini php: laravel symfony codeigniter lumen yii2 zend python: django flask falcon java: spring play2 vert.x dropwizard js: express koa hapi ruby: sinatra grape elixir: phoenix

no mongodb. no platform (only fullstack and micro)

generally i only say something did poorly if all instances of it did poorly, unless i think some instances are irrelevant, in which case i note that here (a sketch of this rule follows the note below):

https://www.reddit.com/r/dotnet/comments/b9fx16/whats_is_aspcoremw/

we don't want aspcore mw (middleware), we want mvc
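(the 'all instances did poorly' rule, as a sketch; the variant latencies here are hypothetical:)

    # a framework fails a cell only if every (relevant) variant of it
    # exceeds the cutoff for that cell
    def did_very_poorly(variant_max_ms, cutoff_ms):
        return all(v > cutoff_ms for v in variant_max_ms)

    did_very_poorly([250, 310], 190)  # -> True: all variants exceed 190ms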

JSON serialization, physical, max latency, did very poorly (>190ms): falcon spring

JSON serialization, cloud, max latency, did very poorly (>300ms): phoenix hapi

Single query, physical, max latency, did very poorly (>600ms): codeigniter symfony

Single query, cloud, max latency, did very poorly (>250ms): symfony codeigniter (spring without webflux or without alternate db connection?) dropwizard

Multiple queries, physical, max latency, did very poorly (>1000): aspcore zend django

Multiple queries, cloud, max latency, did very poorly (>2000): symfony codeigniter django yii gin zend

Fortunes, physical, max latency, did very poorly (>200ms): codeigniter gin

Fortunes, cloud, max latency, did very poorly (>200ms): codeigniter flask hapi dropwizard symfony

Data updates, physical, max latency, did very poorly (>2000ms): symfony django gin

Data updates, cloud, max latency, did very poorly (>5000ms): symfony django

Plaintext, physical, max latency, did very poorly (>1100ms): yii hapi

Plaintext, cloud, max latency, did very poorly (>2000ms): yii spring (but only one variant included)

so eliminate everything that appears on more than one of those lists: codeigniter symfony dropwizard zend django gin (surprisingly) yii hapi -- i'm not going to remove spring yet, because where it came up (json serialization, plaintext) it had fewer variants than usual, and also i care about those scenarios the least

remaining to be assessed:

C#: asp.netcore go: beego martini php: laravel lumen python: flask falcon java: spring play2 vert.x js: express koa ruby: sinatra grape elixir: phoenix

JSON serialization, physical, max latency, did very poorly (>150ms): play2 falcon spring

JSON serialization, cloud, max latency, did very poorly (>100ms): spring phoenix

Single query, physical, max latency, did very poorly (>90ms): lumen

Single query, cloud, max latency, did very poorly (>150ms): phoenix

Multiple queries, physical, max latency, did very poorly (>400ms): asp

Multiple queries, cloud, max latency, did very poorly (>1400ms): flask

Fortunes, physical, max latency, did very poorly (>50ms): koa spring

Fortunes, cloud, max latency, did very poorly (>130ms): flask phoenix

Data updates, physical, max latency, did very poorly (>1000ms): grape

Data updates, cloud, max latency, did very poorly (>4000ms): grape

Plaintext, physical, max latency, did very poorly (>500ms): martini spring

Plaintext, cloud, max latency, did very poorly (>300ms): laravel spring

ones appearing on 2 of those lists: spring phoenix flask grape

with variants over all appearances: spring (1 physical + 1 cloud + 4 physical + 1 physical + 1 cloud) phoenix (1 cloud + 1 cloud + 1 cloud) flask (5 cloud + 5 cloud) grape (2 physical + 2 cloud, but both in data updates scenario)

i'll get rid of spring, and i'll save phoenix and grape because they had either (few variants over all appearances and only appear in cloud), or (all appeared in one scenario). Flask is marginal so i'll keep it.

remaining to be assessed:

C#: asp.netcore go: beego martini php: laravel lumen python: flask falcon java: play2 vert.x js: express koa ruby: sinatra grape elixir: phoenix

now let's look at avg+1stddev latency

JSON serialization, physical, avg+1stddev latency, did very poorly (>5ms): flask 5.2 play2 6.5 lumen 5.1

JSON serialization, cloud, avg+1stddev latency, did very poorly (>10ms): laravel 11.6 phoenix 39

Single query, physical, avg+1stddev latency, did very poorly (>10ms): koa 10.9

Single query, cloud, avg+1stddev latency, did very poorly (>20ms): phoenix 32.3

Multiple queries, physical, avg+1stddev latency, did very poorly (>250ms): asp 260

Multiple queries, cloud, avg+1stddev latency, did very poorly (>1000ms): lumen 1110 laravel 1160

Fortunes, physical, avg+1stddev latency, did very poorly (>10ms): koa 12

Fortunes, cloud, avg+1stddev latency, did very poorly (>20ms): laravel 21.2 phoenix 37.4 flask 44.7

Data updates, physical, avg+1stddev latency, did very poorly (>500ms): grape 1000

Data updates, cloud, avg+1stddev latency, did poorly (>2000ms): lumen 2300 laravel 2500 koa 2500 grape 3000

(note: flask 1300 phoenix 1140 sinatra 1800 )

Plaintext, physical, avg+1stddev latency, did very poorly (>250ms): martini 285

Plaintext, cloud, avg+1stddev latency, did very poorly (>500ms): grape 535

ones appearing on 2 of those lists: flask lumen laravel koa phoenix grape

with variants over all appearances: flask 3p 5c, lumen 2p 2c 2c, laravel 2c 2c 2c 2c, koa 2p 2p 2c, phoenix 1c 1c 1c, grape 2c 2c 2c (notation: e.g. '3p' means 3 variants in a physical appearance, '5c' means 5 variants in a cloud appearance)

eliminate lumen, laravel, koa

forgive flask, phoenix, grape because they each appear in only a few categories, or have few appearances, all of them cloud

remaining to be assessed:

C#: asp.netcore go: beego martini python: flask falcon java: play2 vert.x js: express ruby: sinatra grape elixir: phoenix

JSON serialization, physical, responsesPerSec, did very poorly (<30000): grape

JSON serialization, cloud, responsesPerSec, did very poorly (<10000): grape

Single query, physical, responsesPerSec, did very poorly (<11000): grape

Single query, cloud, responsesPerSec, did very poorly (<5000): sinatra grape

Multiple queries, physical, responsesPerSec, did very poorly (<2500): grape

Multiple queries, cloud, responsesPerSec, did very poorly (<800): sinatra grape

Fortunes, physical, responsesPerSec, did very poorly (<20000): sinatra

Fortunes, cloud, responsesPerSec, did very poorly (<4000): sinatra

Data updates, physical, responsesPerSec, did very poorly (<1000): grape

Data updates, cloud, responsesPerSec, did very poorly (<200): sinatra grape

Plaintext, physical, responsesPerSec, did very poorly (<30000): grape

Plaintext, cloud, responsesPerSec, did very poorly (<10000): grape

so eliminate sinatra, grape

remaining to be assessed:

C#: asp.netcore go: beego martini python: flask falcon java: play2 vert.x js: express elixir: phoenix

JSON serialization, physical, latencyStddev, did poorly (>4ms): play2

JSON serialization, cloud, latencyStddev, did poorly (>4ms): phoenix

Single query, physical, latencyStddev, did poorly (>2ms): play2

Single query, cloud, latencyStddev, did poorly (>5ms): phoenix 7

Multiple queries, physical, latencyStddev, did poorly (>80ms): asp

Multiple queries, cloud, latencyStddev, did poorly (>100ms): flask

Fortunes, physical, latencyStddev, did poorly (>2ms): play2

Fortunes, cloud, latencyStddev, did poorly (>6ms): phoenix flask

Data updates, physical, latencyStddev, did poorly (>50ms): asp

Data updates, cloud, latencyStddev, did poorly (>90ms): martini phoenix flask

Plaintext, physical, latencyStddev, did poorly (>100ms): martini asp

Plaintext, cloud, latencyStddev, did poorly (>100ms): martini flask

ones in 3 or more of the above, disregarding categories where the cutoff was in the single-digit ms: asp flask martini; eliminate those. Note also that beego either isn't present or didn't complete many benchmarks, so eliminate it as well.

remaining to be assessed:

C#: go: python: falcon java: play2 vert.x js: express elixir: phoenix

i don't care so much about latencyAvg, so i won't try so hard to knock anyone out:

JSON serialization, physical, latencyAvg, did poorly (>2ms):

JSON serialization, cloud, latencyAvg, did poorly (>5ms): phoenix

Single query, physical, latencyAvg, did poorly (>4ms):

Single query, cloud, latencyAvg, did poorly (>15ms): phoenix

Multiple queries, physical, latencyAvg, did poorly (>150ms):

Multiple queries, cloud, latencyAvg, did poorly (>900ms):

Fortunes, physical, latencyAvg, did poorly (>4ms):

Fortunes, cloud, latencyAvg, did poorly (>20ms): phoenix

Data updates, physical, latencyAvg, did poorly (>400ms):

Data updates, cloud, latencyAvg, did poorly (>1100ms):

Plaintext, physical, latencyAvg, did poorly (>30ms):

Plaintext, cloud, latencyAvg, did poorly (>100ms):

by rights phoenix should be knocked out here, but i'll leave it in because the pickings are getting slim

ok, let's go back and look at the worst performer out of these in each category (still grouping together variants and taking the best; a sketch of this grouping follows the five blocks below) (after the semicolon is shown the limit found above):

latency max, ms: JSON serialization, physical: falcon 171 ; 150 JSON serialization, cloud: phoenix 367 ; 100 Single query, physical: play2 48 ; 90 Single query, cloud: phoenix 158 ; 150 Multiple queries, physical: phoenix 227 ; 400 Multiple queries, cloud: express 959 ; 1400 Fortunes, physical: play2 45 ; 50 Fortunes, cloud: phoenix 174 ; 130 Data updates, physical: express 445 ; 1000 Data updates, cloud: phoenix 1620 ; 4000 Plaintext, physical: phoenix 460 ; 500 Plaintext, cloud: phoenix 886 ; 300

note: phoenix is the most commonly appearing framework here

latency stddev, ms: JSON serialization, physical: play2 4.8 ; 4 JSON serialization, cloud: phoenix 13 ; 4 Single query, physical: play2 2.2 ; 2 Single query, cloud: phoenix 7 ; 5 Multiple queries, physical: express 12.7 ; 80 Multiple queries, cloud: phoenix 78 ; 100 Fortunes, physical: play2 2.1 ; 2 Fortunes, cloud: phoenix 7 ; 6 Data updates, physical: express 14 ; 50 Data updates, cloud: phoenix 136 ; 90 Plaintext, physical: play2 23 ; 100 Plaintext, cloud: express 85 ; 100

note: phoenix is the most commonly appearing framework here

latency avg+stddev, ms: JSON serialization, physical: play2 6.5 ; 5 JSON serialization, cloud: phoenix 39 ; 10 Single query, physical: play2 5.2 ; 10 Single query, cloud: phoenix 33 ; 20 Multiple queries, physical: phoenix 135 ; 250 Multiple queries, cloud: play2 561 ; 1000 Fortunes, physical: play2 4.4 ; 10 Fortunes, cloud: phoenix 37 ; 20 Data updates, physical: express 390 ; 500 Data updates, cloud: phoenix 1146 ; 2000 Plaintext, physical: play2 44 ; 250 Plaintext, cloud: express 244 ; 500

note: phoenix and play2 are the most commonly appearing frameworks here

latency avg, ms: JSON serialization, physical: phoenix 1.7; 2 JSON serialization, cloud: phoenix 26; 5 Single query, physical: play2 2.5; 4 Single query, cloud: phoenix 26; 15 Multiple queries, physical: phoenix 126; 150 Multiple queries, cloud: express 711; 900 Fortunes, physical: express 1.9; 4 Fortunes, cloud: phoenix 30.1; 20 Data updates, physical: express 353; 400 Data updates, cloud: phoenix 1010; 1100 Plaintext, physical: express 24; 30 Plaintext, cloud: phoenix 89; 100

note: phoenix is the most commonly appearing framework here

best, responses per second: JSON serialization, physical: phoenix 148k ; 30k JSON serialization, cloud: phoenix 20k ; 10k Single query, physical: phoenix 63.5k ; 10k Single query, cloud: phoenix 10k ; 5k Multiple queries, physical: phoenix 4k ; 2.5k Multiple queries, cloud: express 700 ; 800 Fortunes, physical: express 51k ; 20k Fortunes, cloud: express 8.5k ; 4k Data updates, physical: express 1400 ; 1k Data updates, cloud: phoenix 488 ; 200 Plaintext, physical: phoenix 187k ; 30k Plaintext, cloud: phoenix 28k ; 10k

note: phoenix is the most commonly appearing framework here
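(mechanically, each cell above is: group rows by base framework, take the best variant's number within each group, then report the worst of those per-framework bests; a sketch, with illustrative numbers loosely echoing the json-serialization cloud avg+stddev cell:)

    # per cell: best variant within each framework, then the worst framework
    from itertools import groupby

    rows = [("play2-java", 6.5), ("play2-java-netty", 4.0), ("phoenix", 39.0)]
    base = lambda name: name.split("-")[0]  # crude variant grouping
    rows.sort(key=lambda r: base(r[0]))
    bests = {k: min(v for _, v in grp)
             for k, grp in groupby(rows, key=lambda r: base(r[0]))}
    worst = max(bests, key=bests.get)  # -> 'phoenix' (39.0)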

taking the worst of the above result thresholds before and after each semicolon, and rounding towards worse, we have a set of candidate result thresholds for reasonably performant frameworks (a sketch of this rule follows the five threshold blocks):

latency max, ms: JSON serialization, physical: 170 JSON serialization, cloud: 370 Single query, physical: 90 Single query, cloud: 160 Multiple queries, physical: 400 Multiple queries, cloud: 1400 Fortunes, physical: 50 Fortunes, cloud: 180 Data updates, physical: 1000 Data updates, cloud: 4000 Plaintext, physical: 500 Plaintext, cloud: 900

latency stddev, ms: JSON serialization, physical: 5 JSON serialization, cloud: 15 Single query, physical: 3 Single query, cloud: 7 Multiple queries, physical: 80 Multiple queries, cloud: 100 Fortunes, physical: 3 Fortunes, cloud: 10 Data updates, physical: 50 Data updates, cloud: 140 Plaintext, physical: 100 Plaintext, cloud: 100

latency avg+stddev, ms: JSON serialization, physical: 10 JSON serialization, cloud: 40 Single query, physical: 10 Single query, cloud: 40 Multiple queries, physical: 250 Multiple queries, cloud: 1000 Fortunes, physical: 10 Fortunes, cloud: 40 Data updates, physical: 500 Data updates, cloud: 2000 Plaintext, physical: 250 Plaintext, cloud: 500

latency avg, ms: JSON serialization, physical: 2 JSON serialization, cloud: 30 Single query, physical: 4 Single query, cloud: 30 Multiple queries, physical: 150 Multiple queries, cloud: 900 Fortunes, physical: 30 Fortunes, cloud: 40 Data updates, physical: 400 Data updates, cloud: 1100 Plaintext, physical: 30 Plaintext, cloud: 100

best, responses per second: JSON serialization, physical: 30k JSON serialization, cloud: 10k Single query, physical: 10k Single query, cloud: 5k Multiple queries, physical: 2.5k Multiple queries, cloud: 700 Fortunes, physical: 20k Fortunes, cloud: 4k Data updates, physical: 1k Data updates, cloud: 200 Plaintext, physical: 30k Plaintext, cloud: 10k
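(the derivation rule for those candidate thresholds, as a sketch: per cell, take whichever of (observed worst among survivors, earlier limit) is worse, then round by hand; the example values are taken from the cells above:)

    # candidate threshold per cell: the worse of observed value vs prior limit
    # ('worse' = higher for latency stats, lower for responses per second)
    def candidate_limit(observed, prior, higher_is_worse=True):
        return max(observed, prior) if higher_is_worse else min(observed, prior)

    candidate_limit(171, 150)  # json serialization physical max: 171, quoted as ~170
    candidate_limit(51_000, 20_000, higher_is_worse=False)  # fortunes physical rps: 20k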

now we can apply these to the candidates' results for their 'normal' variant to see who fails (and which other variants fail):

latency max, ms: JSON serialization, physical: falcon, play2-java fail at 200 JSON serialization, cloud: none ; play2-java-netty Single query, physical: none ; vertx-web-postgres and play2-java-jpa-hikaricp* and play2-java-ebean-hikaricp and play2-java-jooq-hikaricp Single query, cloud: none ; express-mysql and play2-java-jpa-hikaricp Multiple queries, physical: none ; express-graphql-* and express-postgres Multiple queries, cloud: none ; express-graphql-mysql Fortunes, physical: play2 non-netty fail at 230; express-mysql Fortunes, cloud: none ; play2-java-jpa-hikaricp Data updates, physical: none ; vertx-web-susom-postgres and express-graphql-mysql Data updates, cloud: none Plaintext, physical: play2-java fail at 715; express-chakra Plaintext, cloud: express fail at 2220 even though express-graphql-mysql at 650 and play2-java fail at 7700 even though play2-java-netty at 860 ; express-chakra and typescript-rest

latency stddev, ms: JSON serialization, physical: none ; falcon-pypy2 and play2-java-netty JSON serialization, cloud: none ; play2-java-netty Single query, physical: none ; play2-java-jooq-hikaricp*, play2-java-jpa-hikaricp*, play2-java-ebean-hikaricp Single query, cloud: none ; play2-java-jpa-hikaricp Multiple queries, physical: none Multiple queries, cloud: none ; express-mysql and typescript-rest and play2-java-ebean-hikaricp* Fortunes, physical: none ; play2-java-jpa-hikaricp and play2-java-ebean-hikaricp and play2-java-jooq-hikaricp Fortunes, cloud: none Data updates, physical: none ; vertx-web-susom-postgres and express-graphql-mysql Data updates, cloud: none ; typescript-rest and express-mysql Plaintext, physical: none ; express-chakra Plaintext, cloud: express fail at 185 even though express-graphql-mysql is 90 and play2-java fail at 1400 even though play2-java-netty is 50; express-chakra and typescript-rest

latency avg+stddev, ms: JSON serialization, physical: none ; falcon-pypy2 and play2-java-netty JSON serialization, cloud: none Single query, physical: none ; express-graphql-postgres and play2-java-jooq-hikaricp* Single query, cloud: none ; play2-java-jpa-hikaricp Multiple queries, physical: none ; express-graphql-* and express-postgres Multiple queries, cloud: none ; express-graphql-* and typescript-rest Fortunes, physical: none ; express-mysql and express-graphql-mysql Fortunes, cloud: none Data updates, physical: none ; express-graphql-* Data updates, cloud: none ; express-mysql Plaintext, physical: none Plaintext, cloud: play2-java fail at 2000 even though play2-java-netty is 90 ; express-chakra and typescript-rest

latency avg, ms: JSON serialization, physical: none ; falcon-pypy2 and play2-java-netty and express-graphql-mysql JSON serialization, cloud: none Single query, physical: none ; express-graphql-* and play2-java-jooq-hikaricp* and express-mysql Single query, cloud: none ; play2-java-jpa-hikaricp Multiple queries, physical: none ; express-graphql-* and express-postgres Multiple queries, cloud: none ; express-graphql-* and typescript-rest Fortunes, physical: none Fortunes, cloud: none Data updates, physical: none ; express-graphql-* Data updates, cloud: none ; play2-java-ebean-hikaricp* and express-mysql Plaintext, physical: none ; express-graphql-mysql (the only graphql one in this one) and express-chakra Plaintext, cloud: play2-java fail at 700 even though play2-java-netty is 35; express-chakra and typescript-rest and express-graphql-mysql the only graphql one in this one) and express-chakra

best, responses per second: JSON serialization, physical: none ; express-graphql-mysql JSON serialization, cloud: none ; express-graphql-mysql Single query, physical: none ; express-graphql-* Single query, cloud: none ; express-graphql-* Multiple queries, physical: express normal not provided but express-postgres fails with 1.3k; express-graphql-* Multiple queries, cloud: express and play2 normal not provided but express-postgres and play2-java-ebean-hikaricp* fail with 580; express-graphql-* Fortunes, physical: express normal not provided but express-postgres fails with 8k; express-graphql-* Fortunes, cloud: none ; express-graphql-* Data updates, physical: none ; express-graphql-* Data updates, cloud: none ; express-graphql-mysql Plaintext, physical: express-graphql-mysql Plaintext, cloud: none

So now update the limits to include the fails. Note: i think there is something anomalous about play2-java latencies in plaintext-cloud; i won't be raising the limits to match that one.

latency max, ms: JSON serialization, physical: 200 JSON serialization, cloud: 370 Single query, physical: 90 Single query, cloud: 160 Multiple queries, physical: 400 Multiple queries, cloud: 1400 Fortunes, physical: 230 Fortunes, cloud: 230 (raised from 180 b/c it shouldn't be less than physical) Data updates, physical: 1000 Data updates, cloud: 4000 Plaintext, physical: 750 Plaintext, cloud: 2300

latency stddev, ms: JSON serialization, physical: 5 JSON serialization, cloud: 15 Single query, physical: 3 Single query, cloud: 7 Multiple queries, physical: 80 Multiple queries, cloud: 100 Fortunes, physical: 3 Fortunes, cloud: 10 Data updates, physical: 50 Data updates, cloud: 140 Plaintext, physical: 100 Plaintext, cloud: 200

latency avg+stddev, ms: JSON serialization, physical: 10 JSON serialization, cloud: 40 Single query, physical: 10 Single query, cloud: 40 Multiple queries, physical: 250 Multiple queries, cloud: 1000 Fortunes, physical: 10 Fortunes, cloud: 40 Data updates, physical: 500 Data updates, cloud: 2000 Plaintext, physical: 250 Plaintext, cloud: 500

latency avg, ms: JSON serialization, physical: 2 JSON serialization, cloud: 30 Single query, physical: 4 Single query, cloud: 30 Multiple queries, physical: 150 Multiple queries, cloud: 900 Fortunes, physical: 30 Fortunes, cloud: 40 Data updates, physical: 400 Data updates, cloud: 1100 Plaintext, physical: 30 Plaintext, cloud: 100

best, responses per second: JSON serialization, physical: 30k JSON serialization, cloud: 10k Single query, physical: 10k Single query, cloud: 5k Multiple queries, physical: 1k Multiple queries, cloud: 500 Fortunes, physical: 8k Fortunes, cloud: 4k Data updates, physical: 1k Data updates, cloud: 200 Plaintext, physical: 30k Plaintext, cloud: 10k

now let's look at the intersection of all frameworks (non-platform-type, sql db, realistic implementation, omit languages C C++ perl ur vala vb) that meet these criteria (except: double the allowable max latency, because max is a very unstable statistic; also bump limits under 10ms up to 10ms):

first restrict by latency avg: (note: the 1100ms limit on data updates, cloud, latency avg eliminated a lot; mb that one was too tight)

note: plaintext only has express-graphql-mysql out of express, so did not eliminate based on that note: plaintext eliminated a lot, not sure if that is a good test (the remaining guys were all over the place, so the threshold would maybe have to be raised quite a bit)

remaining is: actframework actix akka-http angel-postgres armeria asp.net asp.net core (actually i merged asp.net and asp.net core together by accident in this filter, sorry) beego blade duct es4x express fastify fintrospect fuel gemini giraffe goji hamlet hanami helidon http4k hyper iron jooby jooby 2.x kami kitura ktor micronaut None officefloor onyx perfect phoenix play2 ratpack raze reitit revenj revenj.jvm rocket roda-sequel sinatra sinatra-sequel snap spider-gazelle spock starlette tokio-minihttp vertx-web warp wizzardo-http yesod

i feel like some of those aren't actually present in many or any of the tests. After eliminating ones which weren't in either fortunes or multiple queries (physical), we have:

remaining is: actix akka-http armeria asp.net core blade duct es4x express fastify fintrospect fuel gemini giraffe goji hamlet hanami http4k iron jooby jooby 2.x kitura ktor micronaut officefloor phoenix play2 ratpack raze rocket roda-sequel sinatra sinatra-sequel spider-gazelle spock starlette tokio-minihttp vertx-web warp wizzardo-http

next restrict by best responses per second:

note: json serialization only has express-graphql-mysql out of express, so did not eliminate based on that

remaining is: actix akka-http armeria asp.net core blade duct es4x express fastify fintrospect gemini giraffe http4k jooby jooby 2.x ktor micronaut officefloor phoenix play2 ratpack raze roda-sequel sinatra sinatra-sequel spider-gazelle spock starlette tokio-minihttp vertx-web warp wizzardo-http

next restrict by (latency stddev and latency avg+stddev both bad; otherwise i'm not sure if latency stddev alone is important):

remaining is: actix armeria asp.net core blade duct es4x express fastify giraffe http4k jooby jooby 2.x ktor officefloor phoenix play2 ratpack roda-sequel sinatra sinatra-sequel spider-gazelle spock starlette tokio-minihttp vertx-web warp wizzardo-http

next restrict by latency max (2x threshold):

remaining is: actix armeria asp.net core duct es4x express fastify http4k ktor officefloor phoenix play2 roda-sequel sinatra sinatra-sequel spider-gazelle spock tokio-minihttp vertx-web

i still feel like a lot of these aren't being tested. Eliminate all that are missing from at least 3 scenarios (even aspcore is missing from 2). Remaining is:

actix armeria asp.net core duct es4x express fastify http4k ktor officefloor phoenix play2 roda-sequel sinatra sinatra-sequel tokio-minihttp vertx-web

next restrict by latency avg+stddev:

(note: armeria was only removed for a slight miss on json serialization, physical) (note: ktor was only removed for a slight miss on fortunes)

remaining is: actix (Rust) asp.net core (C#/.net, popular) duct (Clojure) es4x (js vert.x) express (js, popular) fastify (js) http4k (kotlin) officefloor (java) phoenix (elixir) play2 (java, popular) roda-sequel (ruby) sinatra (ruby) sinatra-sequel (ruby) tokio-minihttp (rust) vertx-web (Java, Kotlin, Scala, Ruby, and Javascript)

note: typescript-rest appears in there and should have been disqualified if it were its own framework, but i think it's an instance of express.

for my purposes:

asp.net core (C#/.net, popular) duct (Clojure) es4x (js vert.x) express (js, popular) fastify (js) http4k (kotlin) officefloor (java) phoenix (elixir) play2 (scala, java, popular) roda-sequel (ruby) sinatra/sinatra-sequel (ruby) vertx-web (Java, Kotlin, Scala, Ruby, and Javascript)

the intersection between these and the ones i've previously heard of is:

asp.net core (C#/.net, popular) express (js, popular) phoenix (elixir) play2 (java, popular) sinatra/sinatra-sequel (ruby)

now let's compare with my earlier analysis, at the top of this page, from 2017:

at that time i ended up with finalists

express (JS, full ORM) gin (Go, raw ORM) (also some other go frameworks but they were less popular) phoenix (Elixir, full ORM)

then eliminated phoenix due to unpopularity (and lack of official Heroku support). btw Phoenix has gotten more popular since 2017 but is still unpopular.

grape, sinatra, bottle, django, flask were eliminated earlier due to performance

even earlier on i had said:

3 tiers: fast: falcore, gin, go, goji, kami medium: phoenix (may be a little better than the other mediums), gemini, express, dropwizard slow: sinatra, flask

comments on variants of the 2020 choices:

note that neither asp.net nor elixir is an officially supported language on Heroku, although the official documentation for Elixir Phoenix tells how to deploy on Heroku (with some limitations, see https://hexdocs.pm/phoenix/heroku.html#limitations and, tangentially related, https://gigalixir.readthedocs.io/en/latest/main.html#mix-vs-distillery-vs-elixir-releases ): https://devcenter.heroku.com/articles/buildpacks#officially-supported-buildpacks

another thing for me to consider: although i'd like to separate the API from the frontend, that does hurt perf, and more importantly, realistically i need to care about ease of onboarding open-source devs, and installing a zillion services is harder than installing one thing. Ease of onboarding open-source devs also argues against using lesser-known languages such as Elixir, and may argue against asp, although i'm not sure about that.

also, for projects that i intend other people to run, ease of deployment is a concern, and again, making users deploy a zillion services may annoy them.

in fact, for projects that i intend other people to run, i should make sure that they can easily be run on widely available shared hosting services.

i took a quick look around; these tend to support at most php, python and django, ruby and rails, node and express, and perl, but not elixir (sometimes more, but not as 'one click installs'). eg: https://aws.amazon.com/lightsail/features/?opdp2=features

interestingly, last time i looked at this, rails and django were also hard to find hosting for, but now hosting them is, if not easy, at least feasible.

also, i looked into express a little more; unlike phoenix and django, it doesn't have CSRF protection and secure headers by default. it really is more like a flask than a phoenix or django.

also, django is slowly moving towards async, and there is already ASGI, an async replacement for the WSGI standard. and django seems to have more development velocity on github.

so i think i'm being pushed back towards django after all.

so let's take a closer look at django's benchmarks:

https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=db&f=zg20o7-zik073-zik0zj-zik0zj-zhxjwf-zijunz-hra0hr-zik0zj-zik0zj-e7

since fortunes is probably the benchmark most representative of real use, this is probably fine to start with. Maybe by the time it matters, Django will have gone async and will be able to improve its multiple queries and data updates performance.