Concurrency competition: Python, NodeJS and Golang

Last night I was watching David Beazley's PyCon 2015 talk (Python Concurrency From the Ground Up: LIVE!), paying close attention to how the GIL affects some of the solutions he presented and how David used a ProcessPoolExecutor in order to use all the cores the machine has.
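A quick way to see the GIL at work (my own sketch, not code from the talk): splitting the same CPU-bound fib work across two threads takes about as long as running it sequentially, because only one thread executes Python bytecode at a time.

```python
import time
from threading import Thread


def fib(n):
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)


def timed(fn):
    # wall-clock time of a zero-argument callable
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start


def sequential():
    fib(25)
    fib(25)


def threaded():
    # two threads, but the GIL lets only one run Python bytecode at a time
    threads = [Thread(target=fib, args=(25,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


print('sequential:', timed(sequential))
print('threaded:  ', timed(threaded))
```

On CPython the two timings come out roughly equal; the threads interleave but never run the CPU-bound work in parallel.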

The code that David Beazley used for the talk is located in his GitHub repo (https://github.com/dabeaz/concurrencylive).

For the talk he created this simple performance test:

#!/usr/bin/env python
# perf2.py
# requests/sec of fast requests

from socket import *
import time

sock = socket(AF_INET, SOCK_STREAM)
sock.connect(('localhost', 25000))

n = 0

from threading import Thread


def monitor():
    global n
    while True:
        time.sleep(1)
        print(n, 'reqs/sec')
        n = 0

Thread(target=monitor).start()

while True:
    sock.send(b'1')
    resp = sock.recv(100)
    n += 1

I said to myself: let me create similar programs in NodeJS and Golang to get a fair comparison of these languages' concurrency capabilities, focused mainly on performance.

Let's begin with connections per second.

David created several versions of a Fibonacci server: simple servers that only return the result of computing the Fibonacci function. The function is just:

def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

He ended up with two different servers: one using a ProcessPoolExecutor, and a more asynchronous one making heavier use of yield and yield from to create coroutines.
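The full servers are in David's repo; as a minimal sketch of the ProcessPoolExecutor idea (my own illustrative names, not his exact code), the handler just submits the CPU-bound call to a pool of worker processes:

```python
from concurrent.futures import ProcessPoolExecutor


def fib(n):
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)


def handle_request(pool, n):
    # the CPU-bound call runs in a worker process,
    # outside the GIL of the serving process
    future = pool.submit(fib, n)
    return future.result()


if __name__ == '__main__':
    with ProcessPoolExecutor() as pool:
        print(handle_request(pool, 20))  # prints 6765
```

Because each request's computation happens in a separate process, several requests can be crunched on different cores at once even though the accepting loop is plain Python.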

Let's use perf2.py to check how they perform on my old Mac Pro.

Using server.py

Here are the results:

2246 reqs/sec
2335 reqs/sec
2217 reqs/sec
2306 reqs/sec
2324 reqs/sec
2319 reqs/sec
2231 reqs/sec
2334 reqs/sec
2227 reqs/sec

Now using aserver.py:

2092 reqs/sec
2118 reqs/sec
1890 reqs/sec
1992 reqs/sec
2046 reqs/sec
2091 reqs/sec

As you can see, the async version handles about 200 requests per second fewer than the ProcessPoolExecutor version.

Let's go with NodeJS:

/**
 * Created by edilio on 12/18/16.
 */

// Load the TCP Library
var net = require('net');


// same base case as the Python and Go versions (fib(1) = fib(2) = 1)
function fib(n) {
    if (n <= 2) {
        return 1;
    } else {
        return fib(n - 1) + fib(n - 2);
    }
}

// Start a TCP Server
net.createServer(function (socket) {

    // Identify this client
    socket.name = socket.remoteAddress + ":" + socket.remotePort;

    // Send a nice welcome message
    socket.write("Welcome " + socket.name + "\n");

    // Handle incoming messages from clients.
    socket.on('data', function (data) {
        var n = parseInt(data.toString(), 10);
        var result = fib(n);
        socket.write(result.toString() + '\n');
    });

    // Say goodbye when the client leaves
    socket.on('end', function () {
        socket.write("good bye.\n");
    });

}).listen(25000);

Results for NodeJS:

16223 reqs/sec
16185 reqs/sec
16059 reqs/sec
15395 reqs/sec

As we can see, the results for NodeJS are a lot better than for Python: roughly 13,000 reqs/sec more than the Python servers. If I run a second ./perf2.py, the numbers go to

11950 reqs/sec
13468 reqs/sec

but that is per instance; with 2 instances the server is really handling about 26000 reqs/sec.

So NodeJS is a lot better than Python 3 for this type of server.

Let's see what happened with Golang. Here is the code for the server:

package main

import (
	"bufio"
	"fmt"
	"io"
	"net"
	"runtime"
	"strconv"
)

// Formula is the type of function a connection handler computes
type Formula func(int) int


func fib(n int) int {
	if n <= 2 {
		return 1
	} else {
		return fib(n-1) + fib(n-2)
	}
}


func launchServer(dnc_host, dnc_port string) {
	msg := fmt.Sprintf("Launching server...on %s:%s", dnc_host, dnc_port)
	fmt.Println(msg)
	// listen on all interfaces
	ln, err := net.Listen("tcp", dnc_host+":"+dnc_port)
	if err != nil {
		fmt.Println(err)
		return
	}

	// run loop forever (or until ctrl-c)
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go handleConnection(conn, fib)
	}
}


func handleConnection(conn net.Conn, fib Formula) {
	// create the buffered reader once, outside the loop, so bytes
	// it has buffered are not lost on the next iteration
	reader := bufio.NewReader(conn)

	// will listen for messages to process ending in newline (\n)
	for {
		message, err := reader.ReadString('\n')
		if err == io.EOF {
			conn.Close()
			return
		}
		if len(message) > 0 {
			if message == "q\n" {
				fmt.Println("received a quit message")
				conn.Close()
				return
			}
			i, err := strconv.Atoi(message[:len(message)-1])
			if err != nil {
				fmt.Println(err)
				fmt.Println(message[:len(message)-1])
			} else {
				r := fib(i)
				result := fmt.Sprintf("%d\n", r)
				conn.Write([]byte(result))
			}
		}
	}
}


func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	launchServer("127.0.0.1", "25000")
}

Look at the results:

19240 reqs/sec
17144 reqs/sec
19087 reqs/sec
18724 reqs/sec
19030 reqs/sec

If I run another perf2.py, the reqs per second go down a little:

17223 reqs/sec
17111 reqs/sec
17198 reqs/sec
17236 reqs/sec

but now we have 2 instances, so it really means 17000 * 2 = 34000 reqs/sec. That is a lot more than Python and a little better than NodeJS, handling several thousand more requests per second. Besides that, in Golang it is very easy to use all available CPUs, so it is highly concurrent with almost no effort.

So at this point it seems sensible to just create servers like this one in Golang or NodeJS instead of Python.

All the code necessary to follow this blog post is in a fork of David's original repo (https://github.com/edilio/concurrencylive).

I hope this can be a reference for the future.
