It's not clear how many records were generated in the sample dataset for these benchmarks. The methodology [1] and GitHub repo [2] show how to seed tables of 1k records, but it's not clear what dataset size the results on the website were generated against.
Additionally, it seems that these benchmarks were run by executing queries serially, which isn't particularly interesting to me -- I'm more interested in what latency and resource consumption look like for each query under any type of load. The issues that we were seeing with Prisma [3] were latency in acquiring a connection and executing a query when many queries were running simultaneously (with idle connections available in the pool and a high connection limit on Postgres). I'd also be curious about the difference in performance for something like a nested `updateMany` which, depending on how the query is generated, could deadlock or place a lock on more rows than necessary -- a case where the generated queries actually matter.
Running queries serially against what is presumably 1k records per table doesn't seem particularly valuable.
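To make the concurrent-load concern concrete, here is a minimal sketch of the kind of measurement I'd find more useful: fire N queries simultaneously and report latency percentiles rather than serial averages. The `run_query` coroutine is a stand-in (an assumption, not part of any benchmark repo); in a real harness it would be an actual ORM call.

```python
import asyncio
import statistics
import time

async def run_query() -> None:
    # Stand-in for a real ORM call (e.g. fetching rows through a pooled
    # connection). Replace with an actual query to measure real latencies;
    # the sleep only simulates ~1ms of query time.
    await asyncio.sleep(0.001)

async def timed_query(latencies: list) -> None:
    # Time the full call, including any wait to acquire a connection.
    start = time.perf_counter()
    await run_query()
    latencies.append(time.perf_counter() - start)

async def load_test(concurrency: int = 100) -> dict:
    # Launch all queries at once instead of serially, then summarize
    # the latency distribution.
    latencies: list[float] = []
    await asyncio.gather(*(timed_query(latencies) for _ in range(concurrency)))
    latencies.sort()
    return {
        "p50": latencies[len(latencies) // 2],
        "p99": latencies[int(len(latencies) * 0.99) - 1],
        "max": latencies[-1],
    }

if __name__ == "__main__":
    print(asyncio.run(load_test()))
```

Under serial execution the pool is never contended, so connection-acquisition latency never shows up; a concurrent harness like this is what surfaces it.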
Hey there, I worked on the benchmarks from the Prisma side.
First off: Thanks for your feedback and questions! We're trying to create a meaningful comparison of ORMs and will be iterating on the benchmark setup and re-running it at some point, so your input is very much appreciated.
> It's not clear how many records were generated in the sample dataset for these benchmarks.
To clear up the confusion: We did indeed use a sample size of 1000 records for the benchmark. If you click the hyperlinked "real-world infrastructure" in the intro paragraph on benchmarks.prisma.io, the pop-up shows this information.
> I'm more interested in what latency and resource consumption look like for each query under any type of load.
This is great feedback and we were already considering adding this kind of information to the next iteration of the benchmarks. Thanks for this input!
> The issues that we were seeing with Prisma [3] were latency in acquiring a connection and executing a query when many queries were running simultaneously (with idle connections available in the pool and a high connection limit on Postgres).
That sounds like something our Engineering team may want to look into! Did you by any chance create a GitHub issue for this? It would be super helpful for the team to have more context on the issues you're describing here, so they can get fixed.
> I'd also be curious about the difference in performance for something like a nested `updateMany` where depending on how the query is generated, could deadlock or place a lock on more rows than necessary -- and where the generated queries actually matter.
This is something that hasn't been on our radar yet, but I'm making a note of it for the next benchmark iteration as well!
Thanks again, please let us know if you have any further questions about Prisma or the benchmarks.
[1] https://www.prisma.io/blog/performance-benchmarks-comparing-...
[2] https://github.com/prisma/orm-benchmarks/
[3] https://docs.hatchet.run/blog/migrating-off-prisma