Shivansh Vij, Founder & CEO
Published Aug 24, 2022
Today we’re announcing fRPC, an RPC framework designed from the ground up to be lightweight, extensible, and extremely performant. We built fRPC because we loved the idea of defining our message types in the standardized proto3 format and having the protobuf compiler generate all the necessary glue code for us - but we didn't like the overhead of encoding and decoding messages in the protobuf format, and we needed a wire protocol that was lighter and faster than HTTP/2.
fRPC offers a few major improvements over existing RPC frameworks like gRPC.
fRPC works with the proto3 syntax, meaning developers can bring their existing proto3 files and tooling. Under the hood, fRPC uses a completely custom messaging format and generates a highly optimized client/server implementation - the result is an RPC framework that substantially outperforms and outscales existing frameworks like gRPC.
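For example, generating the fRPC client and server from an existing proto3 file is a single protoc invocation alongside the usual gRPC one. The sketch below uses go:generate directives; the fRPC plugin and output flag names shown here are illustrative, so check https://frpc.io for the exact installation and invocation:

```go
// Package echo sketches how code generation is driven from a single proto3
// file. The gRPC directives are standard; the fRPC plugin and flag names are
// illustrative - see https://frpc.io for the exact installation and invocation.
package echo

// Standard protobuf + gRPC code generation from echo.proto.
//go:generate protoc --go_out=. --go-grpc_out=. echo.proto

// fRPC code generation from the very same proto3 file (illustrative flag name).
//go:generate protoc --go-frpc_out=. echo.proto
```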
fRPC is available now in early alpha under the Apache 2.0 open-source license, and documentation is available at https://frpc.io. We’ll also be releasing fRPC for Rust and TypeScript soon, with more languages to follow!
If you’re looking for an RPC framework that can easily handle 2M+ messages/sec, we think fRPC will be exactly what you need.
But exactly how fast is fRPC? Well, we originally designed fRPC for our own messaging needs at Loophole Labs, where we needed to send both large and small amounts of data in a latency-sensitive manner. We also needed it to be massively scalable - able to handle thousands of concurrent connections and service millions of messages.
I won’t go into how we achieved this here, but we are working on a series of engineering-focused blog posts that will go in-depth on how fRPC works under the hood. In the meantime, we recommend you check out the fRPC GitHub repository, or take a look at some of the generated fRPC code - we made sure the code generated by the protoc compiler is easy to read and understand.
Now for some benchmarks! We can’t just claim that fRPC is faster than a battle-tested tool like gRPC without backing it up with an apples-to-apples comparison. These benchmarks are publicly available at https://github.com/loopholelabs/frpc-go-benchmarks, and we encourage you to run them for yourselves.
To make sure our benchmark is fair, we’ll be using the exact same proto3 file as the input for both fRPC and gRPC. Moreover, we’ll be using the exact same service implementation for both the gRPC and fRPC servers - the generated service interfaces in fRPC are designed to look the same as in gRPC, so using the same service implementation is extremely simple.
We’ll be running two different benchmarks with an increasing number of concurrent clients to show off both the throughput and the scalability of fRPC compared to gRPC.
It should be noted that the benchmark service itself simply echoes data back to the client without doing any work - the point of the benchmark is to show off the throughput and scalability of fRPC, not the performance of the service itself. That being said, we've also created benchmarks that do perform work; those are available both in the public benchmark repository and on our documentation site, https://frpc.io.
The following benchmarks were performed on a bare metal host running Debian 11, 2x AMD EPYC 7643 48-Core CPUs, and 256GB of DDR4 memory. The benchmarks were performed over a local network to avoid inconsistencies due to network latency.
In this benchmark, each client repeatedly makes an RPC to the fRPC or gRPC server with a randomly generated 32-byte message and waits for the response.
In each run we increase the number of concurrently connected clients and measure the average throughput of each client to see how well fRPC and gRPC scale.
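Here’s a minimal sketch of what each benchmark client does (the EchoClient interface and message types below are illustrative stand-ins for the generated gRPC/fRPC clients, which share the same shape):

```go
// Package benchmark sketches the per-client loop used in the first benchmark.
package benchmark

import (
	"context"
	"crypto/rand"
	"fmt"
	"sync/atomic"
	"time"
)

// Request and Response stand in for the generated message types.
type Request struct{ Message []byte }
type Response struct{ Message []byte }

// EchoClient is the shape shared by the generated gRPC and fRPC clients
// (illustrative - the real generated names depend on your proto3 file).
type EchoClient interface {
	Echo(ctx context.Context, req *Request) (*Response, error)
}

// runClient makes back-to-back RPCs with a random 32-byte payload until the
// context is cancelled, counting every completed call.
func runClient(ctx context.Context, client EchoClient, completed *atomic.Int64) error {
	payload := make([]byte, 32)
	if _, err := rand.Read(payload); err != nil {
		return err
	}
	for ctx.Err() == nil {
		if _, err := client.Echo(ctx, &Request{Message: payload}); err != nil {
			return err
		}
		completed.Add(1)
	}
	return nil
}

// report prints the total and average per-client throughput after a run.
func report(completed *atomic.Int64, clients int, elapsed time.Duration) {
	total := float64(completed.Load())
	fmt.Printf("%.0f RPCs/sec total, %.0f RPCs/sec per client\n",
		total/elapsed.Seconds(), total/elapsed.Seconds()/float64(clients))
}
```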
From the graph above it’s obvious that fRPC consistently outperforms and outscales gRPC - often by more than 2x. In the case of 8192 connected clients, fRPC’s throughput is still 112 RPCs/second while gRPC drops to only 29.
That means clients using fRPC get almost 4x the throughput of gRPC clients, using the same services and the same proto3 files.
Now let’s look at how fRPC servers scale as we increase the number of connected clients. For this benchmark, we’re going to make it so that each client repeatedly sends 10 concurrent RPCs in order to saturate the underlying TCP connections and the accompanying RPC server.
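Continuing the sketch above (same illustrative EchoClient and counter, plus the sync package), each client keeps 10 RPCs in flight with a small pool of goroutines:

```go
// runSaturatingClient keeps 10 RPCs in flight over a single client connection
// by running 10 goroutines that each make back-to-back calls. It reuses the
// illustrative EchoClient, Request, and counter from the sketch above.
func runSaturatingClient(ctx context.Context, client EchoClient, completed *atomic.Int64) {
	const inFlight = 10

	payload := make([]byte, 32)
	_, _ = rand.Read(payload)

	var wg sync.WaitGroup
	for i := 0; i < inFlight; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ctx.Err() == nil {
				if _, err := client.Echo(ctx, &Request{Message: payload}); err != nil {
					return
				}
				completed.Add(1)
			}
		}()
	}
	wg.Wait()
}
```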
As before, we can see that fRPC consistently outperforms gRPC - but as we increase the number of clients, it’s also clear that fRPC doesn’t slow down nearly as much as the gRPC server does. It’s able to handle more than 2,000,000 RPCs/second, and the bottleneck actually seems to be our bare-metal host rather than fRPC itself.
In the case where we have 8192 connected clients, the gRPC server handles just under 500,000 RPCs/second - whereas fRPC handles more than 4x that.
These benchmarks show off just a small portion of fRPC’s capabilities, and we encourage everyone to run them for themselves. We’ll also have similar benchmarks for other popular RPC frameworks available on the fRPC docs site, as well as benchmarks comparing fRPC’s messaging format with protobuf and other serialization frameworks.
Performance is great, but it’s not the only reason we built fRPC. One of the main requirements our team had that led to the creation of fRPC was the ability to do things that just weren’t possible with gRPC.
Because of its architecture, developers have the ability to turn fRPC off and retrieve the underlying TCP connections so they can be reused for something else.
Within our team this feature has been extremely popular because it’s allowed us to establish fRPC clients, send a couple of RPCs for authentication, and then turn fRPC off and use the existing TCP connections directly with an etcd or PostgreSQL client!
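As a rough sketch of that workflow - the AuthClient interface and the RawConn method below are placeholders, not the actual fRPC API, so check the fRPC docs for the real mechanism:

```go
// Package example sketches the "authenticate over fRPC, then reuse the TCP
// connection" workflow. AuthClient, AuthRequest, and RawConn are placeholders
// for illustration only - they are not the actual fRPC API.
package example

import (
	"context"
	"net"
)

type AuthRequest struct{ Token string }
type AuthResponse struct{ OK bool }

// AuthClient stands in for a generated fRPC client. RawConn is a placeholder
// for whatever mechanism fRPC exposes to turn itself off and hand back the
// underlying net.Conn - see https://frpc.io for the real API.
type AuthClient interface {
	Authenticate(ctx context.Context, req *AuthRequest) (*AuthResponse, error)
	RawConn() (net.Conn, error)
}

// authenticateThenReuse sends an authentication RPC over fRPC, then turns
// fRPC off and returns the raw TCP connection so it can be handed directly
// to something else, e.g. an etcd or PostgreSQL client that accepts a net.Conn.
func authenticateThenReuse(ctx context.Context, client AuthClient, token string) (net.Conn, error) {
	if _, err := client.Authenticate(ctx, &AuthRequest{Token: token}); err != nil {
		return nil, err
	}
	return client.RawConn()
}
```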
What’s more, fRPC also allows developers to deviate from the standard Request/Reply pattern that RPC is so famous for. It’s now actually possible to implement custom messaging patterns alongside your RPCs using the same framework, same code base - and the same clients.
Want to reuse your servers as message brokers for a low-latency Pub/Sub system? How about reusing clients to stream metrics in real-time (while also making standard RPCs)?
Frankly, we have no idea how developers will take advantage of these features - but within our engineering team it’s already been described as “taking off handcuffs we didn’t even know we had”.
With fRPC it’s now possible to implement custom messaging systems alongside standard RPCs - something that would normally require multiple frameworks and a host of niche knowledge to achieve.
Since its conception, what’s always drawn developers to gRPC is the ability to define message types and service methods in the proto3 format - and have the protobuf compiler automatically generate the client and server code in whatever language is required.
The idea was always to make cross-service (and more importantly, cross-language) communication a breeze. Developers flocked to gRPC simply because it was a reliable way of doing that.
With fRPC our goal was to deliver a high-performance RPC framework where developers could take advantage of its speed and extensibility without a steep learning curve. We achieved that by building an experience that feels very familiar to anyone who’s used gRPC in the past.
As an example, here’s a simple “Echo” service (written in Go) implemented in both gRPC and fRPC:
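(The snippet below is a minimal sketch rather than the exact generated code - the type and interface names will depend on your proto3 file - but it shows the shape that both generators produce.)

```go
// A minimal sketch of the Echo service. The proto3 input is roughly:
//
//	service EchoService {
//	  rpc Echo(Request) returns (Response);
//	}
//	message Request  { string message = 1; }
//	message Response { string message = 1; }
//
// Both generators produce a service interface with an Echo method of the same
// shape (the exact generated identifiers are illustrative here).
package echo

import "context"

type Request struct{ Message string }
type Response struct{ Message string }

// Roughly what the gRPC generator's service interface looks like.
type GRPCEchoServer interface {
	Echo(ctx context.Context, req *Request) (*Response, error)
}

// Roughly what the fRPC generator's service interface looks like - the same shape.
type FRPCEchoServer interface {
	Echo(ctx context.Context, req *Request) (*Response, error)
}

// A single implementation satisfies both interfaces.
type echoService struct{}

func (e *echoService) Echo(ctx context.Context, req *Request) (*Response, error) {
	// Simply echo the message back to the caller.
	return &Response{Message: req.Message}, nil
}
```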
See the similarities? The service interface is exactly the same, and we can use the exact same implementation of the Echo method for both!
So what does the "f" in fRPC stand for? It actually stands for Frisbee, the proverbial wizard behind the curtain. When we started out at Loophole, we realized we needed a messaging framework that was easily extensible while also being able to scale to thousands of concurrent connections on a single node.
NATS was the first solution that came to mind, but we realized it wouldn't work for our needs: we needed to do much more than just Request/Reply and Pub/Sub, and we needed much more control over the route that messages took than NATS could provide.
So, we set about building Frisbee - a messaging framework designed to implement other messaging frameworks, something that would handle all the plumbing for us and let our developers focus on the actual messaging logic instead.
It's important to note the distinction between fRPC and Frisbee. fRPC uses proto3 files to generate client and server implementations that use Frisbee under the hood. This is why fRPC is so performant compared to other RPC frameworks - the Frisbee messaging framework and wire protocol are lightweight and extremely optimized.
At its core, Frisbee is best described as a bring-your-own-protocol messaging framework: the goal was to make it possible for developers to define their own messaging patterns and protocols while Frisbee handles the actual lower-level implementation for them.
With Frisbee you can implement any protocol or pattern you'd like, but since the RPC pattern is so common, fRPC allows you to generate the necessary client and server code for RPCs very quickly with nothing more than a proto3 file.
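To make that concrete, here’s a conceptual sketch of the bring-your-own-protocol idea - the types below are illustrative and are not the actual frisbee-go API - where a protocol is just a table mapping operation codes to handlers:

```go
// Package example sketches the bring-your-own-protocol idea: declare the
// operation codes in your protocol and what to do when each arrives, and let
// the framework own connection handling, framing, and dispatch. These types
// are illustrative - they are not the actual frisbee-go API.
package example

import "context"

// Packet is a framed message: an operation code plus an opaque payload.
type Packet struct {
	Operation uint16
	Payload   []byte
}

// Handler reacts to an incoming packet and optionally returns one to send back.
type Handler func(ctx context.Context, incoming *Packet) *Packet

// HandlerTable maps each operation code in your protocol to its handler.
type HandlerTable map[uint16]Handler

// A two-operation protocol: a request/reply pair plus a one-way publish.
// RPC is just one such table; Pub/Sub or anything else is another.
const (
	OpEcho    uint16 = 10
	OpPublish uint16 = 11
)

func newTable(publish func(topic string, payload []byte)) HandlerTable {
	return HandlerTable{
		OpEcho: func(ctx context.Context, in *Packet) *Packet {
			// Request/Reply: echo the payload straight back.
			return &Packet{Operation: OpEcho, Payload: in.Payload}
		},
		OpPublish: func(ctx context.Context, in *Packet) *Packet {
			// One-way: fan the payload out to subscribers, nothing to send back.
			publish("metrics", in.Payload)
			return nil
		},
	}
}
```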
This is an initial release of fRPC for the Go programming language, and our goal for the ecosystem is to make it possible to use fRPC across multiple languages. Rust and TypeScript are first on the docket, but more languages will follow depending on the needs of our community.

We also don’t yet support all of the features that gRPC does, the most notable being Streaming and OneOf message types. Rest assured, these are actively being worked on, and we’d love for contributors to help out in making them available as quickly as possible.

We also plan on growing the capabilities of Frisbee itself - not just by implementing it in other languages, but also by continuing to improve its performance and ease of use. We’d also like to implement an in-browser version of Frisbee that relies on WebSockets - this will make it possible to use fRPC directly in the browser without needing to modify your backend at all.
Check out our getting started guide to quickly get up and running with fRPC! We’d love to hear what you think about it, and we encourage folks to contribute to both the fRPC documentation and the project itself by making a pull request!
If something isn’t working right, please feel free to open an issue on our GitHub.
If you need help getting started, the #frisbee channel in our Discord is a great place to get help with all things Frisbee and fRPC! You can also follow us on Twitter to stay up to date on all things Loophole!