nodejs vs golang: 7 benchmark battles that will change your cloud architecture forever

why benchmark node.js vs golang in the first place?

if you are building a cloud-native application, you probably wear two hats at once: devops for pipelines and full stack for code. choosing the wrong runtime can double your ci/cd time and your cloud bill. these seven head-to-head benchmark battles give you real numbers to share with your cto instead of another heated reddit thread.

the lab set-up we used

all tests ran on the same c6i.xlarge aws instance in us-east-1. we pinned:

  • node.js 20.x lts (with --max-old-space-size=4096)
  • go 1.21 (compiled with go build -ldflags="-s -w")
  • docker containers (same cgroup limits)
  • 1000 warm-up rounds, then 10 000 requests per concurrency level

the code, terraform files, and raw results live in a public repo so you can reproduce the numbers on your student credits.

battle #1 – hello world latency

what we measured

plain http get that returns {"msg":"hello"}.

the scorecard

concurrency | node.js p99 (ms) | go p99 (ms) | winner
1           | 0.8              | 0.3         | go
100         | 5.2              | 2.1         | go
1 000       | 11.4             | 4.0         | go

take-away: go’s net/http serves each connection on its own goroutine, which keeps tail latency lower even before you add any optimization.

battle #2 – cpu-bound prime sieve

what we measured

calculate the 50 000th prime number, repeated 10 000 times.

code snippets

// node.js – worker_threads to avoid blocking the event loop
const { Worker, isMainThread, parentPort } = require('worker_threads');
if (isMainThread) {
  new Worker(__filename);
} else {
  // prime logic here, then parentPort.postMessage(...) with the result
}
// go – simple goroutine pool
func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- prime(j)
    }
}
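the go snippet above calls a prime() helper that isn't shown. a self-contained sketch with naive trial division (the repo may use a real sieve; this version just makes the pool runnable):

```go
package main

import "fmt"

// prime returns the n-th prime (1-indexed) by naive trial division.
func prime(n int) int {
	count := 0
	for candidate := 2; ; candidate++ {
		isPrime := true
		for d := 2; d*d <= candidate; d++ {
			if candidate%d == 0 {
				isPrime = false
				break
			}
		}
		if isPrime {
			count++
			if count == n {
				return candidate
			}
		}
	}
}

// worker drains the jobs channel and pushes each result back.
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		results <- prime(j)
	}
}

func main() {
	jobs := make(chan int, 4)
	results := make(chan int, 4)
	for w := 1; w <= 4; w++ {
		go worker(w, jobs, results)
	}
	for i := 0; i < 4; i++ {
		jobs <- 1000 // ask each job for the 1 000th prime
	}
	close(jobs)
	for i := 0; i < 4; i++ {
		fmt.Println(<-results) // prints 7919 four times
	}
}
```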

results

  • node.js: 2.8 req/sec per vcpu
  • go: 11.4 req/sec per vcpu

devops tip: if you run node.js for cpu-heavy tasks, budget more horizontal replicas instead of vertical scaling.

battle #3 – json serialization throughput

we marshalled 1 mb json payloads back-to-back. go’s built-in encoding/json edged node.js by 18 %, largely because it caches a per-type encoder after the first marshal instead of re-examining the type on every call.

battle #4 – memory footprint

runtime | idle rss | 50 rps rss | 500 rps rss
node.js | 38 mb    | 120 mb     | 340 mb
go      | 12 mb    | 42 mb      | 110 mb

full stack note: on a memory-bound kubernetes cluster, 3 node.js pods consume roughly the memory of 10 go pods. that affects your node pool sizing and your budget.
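to see what that means for bin-packing, here is the arithmetic as a tiny go helper, assuming a hypothetical 4 gib of allocatable memory per worker node and the 500 rps rss numbers above:

```go
package main

import "fmt"

// podsPerNode returns how many pods of a given rss (mb) fit into a node's
// allocatable memory (mb), ignoring system and kubelet overhead.
func podsPerNode(allocatableMB, podRSSMB int) int {
	return allocatableMB / podRSSMB
}

func main() {
	const allocatable = 4096 // assumed 4 gib worker node
	fmt.Println(podsPerNode(allocatable, 340)) // node.js at 500 rps: 12 pods
	fmt.Println(podsPerNode(allocatable, 110)) // go at 500 rps: 37 pods
}
```

roughly 3× more go pods per node, which is where the sizing gap in the note comes from.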

battle #5 – startup time < 1 s

knative cold starts matter for serverless. go’s compiled binaries start in ~70 ms, while node.js needs 200–300 ms to load and jit-compile its bundle. if your seo strategy relies on fast time-to-first-byte at global edges, this 3–4× gap can change your cdn cache ttls.

battle #6 – horizontal scaling

we used hey -c 5000 -z 10s against a load balancer.

  • node.js hit 8 000 rps before cpu throttled.
  • go peaked at 22 000 rps using 70 % of the same cpu limit.

scaling rule of thumb: 1 go replica ≈ 2.5 node replicas for identical traffic.
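the rule of thumb falls out of simple division; a quick go sketch using the measured peaks (the 50 000 rps target is an arbitrary example, not part of the benchmark):

```go
package main

import (
	"fmt"
	"math"
)

// replicasNeeded returns how many replicas are required to serve a target
// rps when one replica sustains perReplicaRPS, rounding up.
func replicasNeeded(targetRPS, perReplicaRPS float64) int {
	return int(math.Ceil(targetRPS / perReplicaRPS))
}

func main() {
	// peaks from this battle: node.js ~8 000 rps, go ~22 000 rps per replica.
	fmt.Println(replicasNeeded(50000, 8000))  // node.js: 7 replicas
	fmt.Println(replicasNeeded(50000, 22000)) // go: 3 replicas
}
```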

battle #7 – devops & ci/cd cycle time

we ran 50 test suites in github actions:

  • node.js average 2 min 30 s (npm ci + jest)
  • go average 1 min 10 s (go mod tidy + go test)

multiply that 80-second gap by 20 micro-services and you save more than 25 minutes of ci time on every pull request.

quick decision matrix

criteria | if you value… | pick…
event-loop i/o, rapid prototyping | javascript skill reuse, npm ecosystem | node.js
cpu-bound work, latency & memory efficiency, fast cold starts | devops cost < 50 % | go

next steps for your cloud architecture

  1. fork the repo and add your own workload (graphql? grpc? mongo vs postgres).
  2. add --cpus=0.5 and re-run; watch how noisy neighbours influence each run.
  3. plot cost vs performance on an aws calculator spreadsheet; share it with students in your next meet-up.

remember: benchmarks are only data points, but data beats dogma every day. pick the runtime that makes your team sleep better, your deploys green, and your invoices smaller.
