Blog: Duck Tapes Transcript: James Snell's NodeJS OC Talk

08/08/2022

James Snell recently spoke at a NodeJS OC event, giving a presentation entitled "A QUIC bit of fun with Node.js," in which he lays out the problems with HTTP/2 that he aims to solve in HTTP/3 using QUIC.

Duck Tapes is available at ducktapes.fm or anywhere you listen to podcasts. Below is a transcription of 11/8/19's bonus episode, which contains the full audio of James's talk from NodeJS OC's recent comeback event.

Alright, so hello. I'm James. Jasnell, pretty much everywhere: Twitter, GitHub. I am head of research at NearForm. Specifically, we just recently created a new group called NearForm Research. NearForm itself is really a services company. But we use so much open source, right, and we've been so active in the open source community, we decided to formalize that. NearForm Research is an R&D group, but instead of focusing on just developing proprietary IP for NearForm's own use, what we focus on is open source technologies for the good of the entire ecosystem. So it is really kind of putting action to that philosophy of giving back to open source.

NearForm derives so much benefit from the open source community. The company grew out of open source. It really is a corporate imperative for us to make sure that we are giving back. And if any of you do work for companies that are using open source, I strongly suggest you just stop and think about how much value your company is deriving from open source, and then try to think about how much of that value is actually making it back to the authors, to the maintainers.

Usually it's very, very little, right. And we need to start figuring out ways of sponsoring that work, giving back to that work. The experiment a few months ago, it was disruptive, but it was like, "He's trying," right? Trying something. We need to do more of that, right. So I strongly encourage you, and I loved the talk. It was fantastic.

Alright, funny story. About a year ago, I'm sitting on my couch writing some code. My youngest, my 14 year old, he's taught himself like four programming languages in two years. He likes to sit there and write code. So he always comes up and asks me what I'm working on. And you know, he comes up and asks, "Hey, are you still working on HTTP/2?" I'm like, "No. Working on HTTP/3."

He's like, "Didn't you just finish two?" Yeah. Things move on. I'm going to talk about HTTP/3 and Node. And specifically a protocol called QUIC. Alright. So, QUIC recap. HTTP/2 in Node, we introduced this in core back in 2017. Very familiar API, straightforward. Require the module, create a server, tell it to listen for something, right.

Familiar API, supports multiplexed streams, flow control, all of this stuff. All of the new wonderful features of HTTP/2, except you can't actually use it. So why can't we actually use it? Well, if you're talking from a Node server to another Node server, right, or Node instance to Node instance, it works great. If you're all running in a single AWS cluster, right, it all works fine.

The problem, though, is when you actually want to do something more useful with it, right. So all of the browsers, they're out there, you know, even on the phones, right. The phones have actually had HTTP/2 for quite a while. Those can talk to your middle boxes like Nginx, no problem, right. HTTP/2 connection works there, works fabulously. You can launch your Chrome Dev Tools. And you can see that "Hey, there's HTTP/2 traffic going on there."

The problem is Nginx and pretty much every other middlebox provider out there will not speak HTTP/2 back. All right, it'll only support HTTP/1. So standard practice, whenever you put Node out on a server: who puts Node directly out on the internet, right? You know, with nothing in front of it. Good. Thank you for not raising your hand.

Never actually expose a Node server directly to the internet, please. You always want to have something like an Nginx in front of it. Right. Especially if you're doing any kind of TLS, any kind of secure connections. Node just is not a great endpoint for doing encrypted connections, right. But because we can't actually talk from Nginx back to Node over HTTP/2, you can't actually use it. Right. You can only have the Node stuff back here, or Node stuff up here, talking HTTP/2 to each other.

So it really limits the use and utility of the protocol. What were some of the other problems with it? HTTP/1, and this is one of the key things, if you ever go back to just basic HTTP 101, right, it's stateless, right. You're just sending messages back and forth, and from one message to the other there's no state that you have to maintain. HTTP/2 completely changed that. Right. HTTP/2 became a stateful protocol in one very specific aspect.

HTTP/2 has a header compression protocol. Right. So in HTTP/1, when you send headers back and forth with your message, typically you're sending a huge amount of wasted, repetitive content. Imagine how many times, for instance, you're sending your user agent string with every request, right. Or with every response, you're getting this date header back. Well, the date header in HTTP/1 is 29 bytes every time, right. And it's being sent back with every response. Right.

So there's a huge amount of wasted space in HTTP/1 traffic, just for sending headers. So with HTTP/2, someone had the idea: let's use header compression. Okay, let's compress those things down. Let's send less data when we're sending headers back and forth. The idea was to use a delta encoding. And what a delta encoding is, is I'm going to send you one piece of data now, and then the next time I send you something, I'm only going to send you the difference. And if I'm reusing anything I sent you before, I'm just going to tell you, "Hey, use the header I already sent." All right.
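The delta idea can be sketched in a few lines of JavaScript. This is a toy illustration of the concept only, not the actual HPACK wire format (which uses indexed tables and Huffman coding):

```javascript
// Toy delta encoding for headers: the first message sends everything
// in full; later messages send only what changed, plus references to
// entries both sides already remember.
function makeHeaderCodec() {
  const table = new Map(); // the shared state both endpoints must maintain

  return {
    encode(headers) {
      const out = [];
      for (const [name, value] of Object.entries(headers)) {
        if (table.get(name) === value) {
          out.push({ ref: name });   // "use the header I already sent"
        } else {
          out.push({ name, value }); // new or changed: send in full
          table.set(name, value);
        }
      }
      return out;
    },
  };
}

const codec = makeHeaderCodec();
const first = codec.encode({ 'user-agent': 'Mozilla/5.0', accept: '*/*' });
const second = codec.encode({ 'user-agent': 'Mozilla/5.0', accept: 'text/html' });
// `first` carries both headers in full; `second` carries only a
// reference for user-agent plus the changed accept value.
```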

Well, unfortunately, that requires maintaining state. Right. And it's state that has to exist at every hop in your network. Alright, so from your client, to your middleboxes, however many middleboxes you have, right, to your origin server. You have to maintain that state in every place. And with HTTP/2 they said, we're not just going to do this once; you have to maintain two state tables, one for each direction. Right.

What it ended up doing is creating significantly higher resource consumption at the middlebox layer. All of these middleboxes are designed to just take a packet of data, do a quick scan on it, and forward it along as quickly as possible. With HTTP/2, however, they actually have to grab it. Decompress the headers. If there's TLS, decrypt, then decompress, then look through the headers. And with HTTP/2 there was a little trick, because the headers can be in any order, right.

To find the ones that you actually need to do the routing, right, they had to cheat a little bit and say those had to come at the front of the list. But you still have to decompress the entire thing. You cannot ignore it. You have to decompress the entire block of headers, whether you're going to do anything with the message or not. Okay.

Now, imagine this combined with all of the multiplexing, where you can send any number of requests over a single HTTP connection, over a single TCP socket. Right, you end up maintaining a ton of state where you didn't have to maintain state before. So the middlebox developers said, screw that, we're not doing it. Right. They do it on the front end, because the browsers basically forced them to. They will not do it on the back end. That makes HTTP/2 significantly less useful.

But there is a larger problem. That's TCP head-of-line blocking, right. So the TCP protocol, right. HTTP sits on top of TCP. TCP takes these messages, right, divides them up into a bunch of individual packets, and those packets are sequenced, okay. TCP is a strictly ordered protocol. That means you have one, two, three; they are in sequence. When those are sent, the first one has to be successfully received before the next one can be processed. Right.

So, if you have any kind of packet loss, any kind of latency, it blocks everything else. You lose a TCP packet, and everything else, all of the network traffic on that connection, has to wait until you resend the one that got lost. And it has to be acknowledged before you can continue sending the next one, all right. With HTTP/1, the impact of this is minimal, right. You're only sending one request and response over the connection at a time. So if one packet gets lost, you're only blocking one request and one response, right.
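That strict in-order delivery can be modeled in a few lines. A toy sketch (invented for illustration) of how one lost packet stalls everything queued behind it:

```javascript
// Toy model of TCP head-of-line blocking: data must be delivered in
// sequence, so a gap in the received sequence numbers stalls every
// packet behind it, even ones that arrived successfully.
function deliverable(receivedSeqs) {
  const received = new Set(receivedSeqs);
  const delivered = [];
  let next = 1;
  while (received.has(next)) { // strict in-order delivery
    delivered.push(next);
    next++;
  }
  return delivered; // everything after a gap has to wait
}

// Packets 1, 3, 4 and 5 arrived; packet 2 was lost in transit.
// Only packet 1 can be processed until 2 is retransmitted.
deliverable([1, 3, 4, 5]);
```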

With HTTP/2, you can have any number of requests and responses flowing at any given time. All right. So what happens if one of those TCP packets gets lost? How many streams are you blocking then? How many requests and responses are you blocking? Right. Okay, it goes up significantly. This was kind of an unintended consequence. This was something that had not been anticipated. Look at long distance connections, specifically ones that are either very unreliable mobile connections, or networks in countries that are not very well developed, or just long distance connections going over the Atlantic, right.

With HTTP/2 traffic, you see significantly higher latency and packet loss issues than you would with HTTP/1. And the protocol that was supposed to be faster, and solve head-of-line blocking and all these kinds of issues for HTTP, ended up just making it worse, but in a different way, and at a lower level. So, HTTP/2 is out there, everyone's using it, it's great.

I recommend, if you can find uses for it, go use it. And the effort we took to actually get it implemented in Node was a very worthwhile effort, even though it's not super useful, right. But we needed a way to actually resolve some of these issues. So don't forget about it, but the utility will be limited. And that's where QUIC comes in. All right.

QUIC is a UDP based transport protocol. Alright. You've probably heard about UDP at some point, but it's seen as kind of a toy, that kind of thing. It's the thing that tons of jokes are told about, right. I'd tell you a UDP joke, but you might not get it. It's very unreliable. Right. It is not uncommon for UDP based protocols to just be considered kind of toy protocols, right. Or very, very specialized protocols. The working group that has been working on HTTP/3 decided to go with it for one very, very specific reason.

UDP packets are not ordered. They're completely independent of one another. Right. You can send 100 of them; if one of them gets lost, it has no effect on the other 99. You just resend the one that got lost. Right. But how do you know that that one got lost? If you look at UDP, just in general, as it's currently defined, there's no sequencing, there's no acknowledgement, there's no reliability, there's no flow control. If something gets lost, you just simply do not know that it got lost. Okay.

That is where QUIC comes in. The nice thing is no head-of-line blocking; we completely eliminate it, right. There is no line there, okay. So what does QUIC do? QUIC adds error handling. If a packet gets lost, we know it got lost. How does it do that? Every packet has an acknowledgement packet. So I send it, and I'm going to get an acknowledgement back. Unlike TCP, though, those are not sequenced. I can keep sending everything else while I'm waiting for that acknowledgement. If I don't get it within a reasonable period of time, I assume it's lost and I'll resend that one packet. Okay.
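A toy model of that per-packet acknowledgement idea (names and timeout handling invented for illustration; real QUIC loss detection is considerably more sophisticated):

```javascript
// Toy per-packet acknowledgement: every packet is tracked on its own,
// so a lost packet is resent individually and never blocks the
// packets sent after it.
function makeSender() {
  const unacked = new Map(); // seq -> { data, sentAtMs }

  return {
    send(seq, data, nowMs) {
      unacked.set(seq, { data, sentAtMs: nowMs });
    },
    ack(seq) {
      unacked.delete(seq); // acknowledged, forget it
    },
    dueForResend(nowMs, timeoutMs) {
      // Only packets whose acknowledgement never arrived are resent.
      return [...unacked.entries()]
        .filter(([, p]) => nowMs - p.sentAtMs > timeoutMs)
        .map(([seq]) => seq);
    },
  };
}

const sender = makeSender();
sender.send(1, 'a', 0);
sender.send(2, 'b', 0);
sender.send(3, 'c', 0);
sender.ack(1);
sender.ack(3);           // packet 2's ack never arrives...
sender.dueForResend(500, 300); // ...so only packet 2 is resent
```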

Every one of those packets has a sequence number. Theoretically I could receive the entire sequence in reverse and still be able to put it back together logically. All right. They can come in a random order, and I could still put it back together. Now there are some limits there with the flow control that's added. One of the other challenges with UDP is that it's extremely easy to flood the network with all this information.

So QUIC adds TCP-like flow control. It says, "Okay, you can only send me this much at a time." All right. "Okay, I've consumed that, send me a bit more." All right. Then there's the built-in encryption, which is the nice one. QUIC actually builds the TLS 1.3 handshake directly into the protocol. So it's not this additional layer on top; it's built into the actual establishment of the connection. So when you start a QUIC flow, it starts with a series of TLS handshake packets.
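Those per-packet sequence numbers are what make out-of-order receipt workable. A toy sketch of reassembly (invented for illustration):

```javascript
// Because each packet carries its own sequence number, the receiver
// can accept packets in any order -- even fully reversed -- and
// reassemble the original data.
function reassemble(packets) {
  return packets
    .slice()
    .sort((a, b) => a.seq - b.seq) // order by sequence number...
    .map((p) => p.data)
    .join('');                     // ...then stitch the payloads together
}

// Received completely out of order:
const message = reassemble([
  { seq: 3, data: 'world' },
  { seq: 1, data: 'hello' },
  { seq: 2, data: ', ' },
]); // → 'hello, world'
```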

Once those establish the secrets, it then transitions into what's called the application flow, right, and starts sending application traffic. You cannot separate those. You cannot have QUIC without TLS. It is built into the protocol. So you have that encryption; it is no longer optional in any way. That has caused some headaches that I'll get into later.
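The TCP-like flow control mentioned a moment ago is credit-based. A toy sketch of the idea (real QUIC tracks connection-level and stream-level windows separately):

```javascript
// Toy credit-based flow control: the receiver grants a window of
// credit, the sender may only send that much, and more credit is
// granted as the receiver consumes data.
function makeFlowController(initialCredit) {
  let credit = initialCredit;

  return {
    trySend(bytes) {
      if (bytes > credit) return false; // "you can only send me this much"
      credit -= bytes;
      return true;
    },
    grant(bytes) {
      credit += bytes; // "I've consumed that, send me a bit more"
    },
    available() {
      return credit;
    },
  };
}

const fc = makeFlowController(1000);
fc.trySend(800); // ok, 200 credit left
fc.trySend(500); // refused: would exceed the window
fc.grant(600);   // receiver consumed data, grants more credit
fc.trySend(500); // now ok
```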

And we have bidirectional and unidirectional streams. So one of the cool new features of HTTP/2 is push streams. The server could initiate a push, right. It's always been kind of questionably useful, because the server doesn't know what to push or when to push it. So the best practice right now is just go ahead and do it. Right.

The client will cancel it if it's not using it. But it ends up being something of a trade-off in bandwidth usage. With QUIC we have this idea of unidirectional streams, which is basically the same basic idea: the server can initiate a stream back to the client without the client asking for it.

With HTTP/3 that's used in a couple of different ways, primarily as a control channel, and I'll get into that a little bit later. All right. So, QUIC and Node. Here's the simple version of it. Right. We've been working on the API. It is a work in progress. And when I say a work in progress, I mean we're breaking it daily. Literally. It was about two months ago, I asked for a change. We're using a library called ngtcp2. There is a fantastic guy in Japan [inaudible 00:15:26]. Absolutely amazing engineer.

It was his library that we used for HTTP/2, and pretty much everybody's using it. Pretty much everybody's using his version of the QUIC implementation right now as well. Absolutely phenomenal developer. So I asked him to add one API to ngtcp2, which is still a work in progress. The spec is still a work in progress. He's like, "Yeah, no problem. I added it. Here you go, update and you've got it."

What I didn't know when I updated is that he changed every other API as well. Completely broke it. In particular the TLS handshake. When I started, have you all heard the phrase "yak shaving"? Right. You start one process, and it leads you to a different task, and that takes you to a different task. Well, fixing this, adding this one API, ended up leading me to patch OpenSSL. We were originally using a patched version of OpenSSL, and what happened is Hiro changed which patched version he was using.

Now we're using a set of APIs that were written for BoringSSL that were backported to OpenSSL 3. But in Node we use OpenSSL 1.1.1. So I had to backport to 1.1.1. And that was fun. So adding one API, and it was just for error handling, ended up meaning a new version of OpenSSL for Node. So that's fun. That's some of the fun of working with open source. It's still a work in progress.

But we got it working again. And this will set up a basic QUIC server for you. All right. Now the one thing you're looking at here is, what about HTTP/3? All right. HTTP/3 is an application protocol that sits on top of QUIC, right. QUIC is the transport protocol. You can use QUIC without having anything to do with HTTP in any way, right. So the nice thing about this implementation in Node is you will be able to create your own protocols on top of QUIC. Okay.

It will take full advantage of all the features of QUIC. You'll have the TLS built in, you'll have the unidirectional streams, everything else. But you can use it for whatever you want. Right. And we're actually going to have additional protocols in Node, not just HTTP/3, that use QUIC, right. Right now I'm working on an experimental version of the inspector protocol that, instead of WebSockets, uses QUIC.

So there's different things that we'll be able to do with it that are pretty interesting. It's pretty straightforward, you know. So you create a socket. This socket is a local UDP socket. It binds to a local UDP port on your system. You can tell it to either listen as a server, or you can tell it to act as a client. Or you can do both at the same time. All right. You give it some basic parameters. Here, we're telling it what port to listen on. We're giving it the TLS details: the key, the certificate, certificate authority information. All right.

It's going to hold on to that. And with this, anytime it receives a request from a client to create a new connection, it's going to use that information. And that's going to call this socket on session. It's going to give you what's called a QUIC session object. And this QUIC session object is really only useful for one thing, creating or receiving streams, okay.

The stream, the QUIC stream, is basically just a Node duplex. So if you're familiar with streams within Node, right, it is an object that you can use to read data, write data, okay. Very, very simple, very, very straightforward. With QUIC, a stream is just a flow of data. Right. There's no headers, there's no structure to the data at all. It's just, here's a stream of data.

HTTP/3 adds the headers, okay, on the QUIC stream objects. You'll be able to create your own protocols with this; there's going to be an option if you want to create a protocol that has headers, right. There's going to be a way of creating those and emitting those and working those in. We're going to have basically a hook where you can specify what the headers are using a common API, and translate those into whatever the actual underlying serialization is.

HTTP/3, that's exactly what we're doing. Right. We're basically using those APIs, and under the covers we are doing the mapping back to how HTTP/3 handles headers. All right. Now it's still using a stateful header compression, but it's different. With HTTP/2 it's called HPACK. With QUIC it's called QPACK. They had to create a new one specifically because of the way QUIC works.

And with HTTP/3, what it's going to do is actually create the session, then create what's called a control channel, and then two header channels. The control channel is basically for things like connection management, right. And then of the two header channels, one is inbound headers and the other is outbound headers, right. It goes back to that statefulness; when you talk about HTTP/2 being stateful, there are two tables you have to maintain.

With QUIC you have these two unidirectional streams that are created for managing that state, right. So there's a bit more complexity under the covers with HTTP/3, but hopefully most of it will be completely hidden from you and you won't care. Right. It's just there. All right.

But the server itself is pretty simple. Creating the client is equally simple. Instead of calling socket.listen, you call socket.connect. And that's going to give you back a session object. Right. And it's going to be the exact same kind of session, just a QUIC session object. And you use that to create a stream; we have an API called openStream. All right.

You get that, you start writing data to it, and you can create that as a unidirectional or bidirectional stream. All right. The API has been designed to be as simple as possible. It gives you full access to everything QUIC can do without any complexity, if you're already familiar with using Node for servers.

Again, QUIC is designed similarly. It is a layer on top. So we have IP, then UDP. The really nice thing about this is that there are no modifications to UDP at all. Right. So on the network, QUIC traffic looks just like any other UDP traffic, right. So you can pass it around, you can route it around, do whatever you want with it. The routing information at the UDP layer and the QUIC layer is still completely transparent, basically completely visible to middleboxes. So middleboxes, unlike with HTTP/2 where they have to unpack and decompress the entire message in order to be able to do anything with it...

With QUIC, they can just look at the QUIC header and the UDP header and determine where that thing needs to go. Right. They don't actually have to look at any of the headers for the HTTP request. They don't care. There is complete visibility at this layer to do anything that the network needs to do: routing, load balancing, that kind of thing. Right.

Now the other thing is, like I said, you can completely erase this, replace it, and do whatever you want with it. You can just use QUIC as it is, or you can create new protocols on top. As for progress on it: the work is underway, we've been working on this for just under a year. We are working on it all out in the open. So if you go to github.com/nodejs/quic, that's the repository that we're using.

It's going to be submitted as an experimental PR at the end of this month. There's still a lot of progress to be made. I can't claim that it's going to be fully operational when we open that PR, but we want to kind of merge it into the mainstream with Node very, very soon. It is a very complex piece of code, and it's getting harder and harder to keep it up to date with the current versions of Node.

So instead of working on it separately, we want to bring it in. Working on large features separately, in a separate repo, is something we do frequently; it's what we did with HTTP/2, it's how we did worker threads, you know, quite a few things. We do it that way to ease the disruption on the mainstream. But after a while, we've got to bring them together. All right.

The work is being sponsored by NearForm and Protocol Labs. And I definitely have to thank Protocol Labs for basically sponsoring my time on this. But again, it's been hit or miss. There's still a lot of stuff to do, right. And the spec is not even expected to be done until later this year, hopefully, if they hit their target. The library that we're using, ngtcp2, won't stabilize until maybe about a month or two after that.

And then our Node implementation is likely going to be experimental for at least a year while we work out all the details. And it's not so much the underlying protocol implementation; it's making sure that all of the APIs on top are exactly what they need to be, that they're usable, that they're clear, concise, familiar. But also making sure there are no security issues. I don't know if you paid attention to HTTP/2.

Earlier this summer, there was a massive security issue with HTTP/2. It came out that every single implementation out there had exactly the same set of vulnerabilities, with a series of denial of service attacks. They were basically completely missed in the specification. Everyone that implemented it just kind of completely overlooked that these flaws existed. And it all came down to how data was being processed in an HTTP/2 connection.

We want to make sure that those kinds of things aren't happening here, so we don't have that similar kind of issue. So we're actually building a number of security capabilities into the QUIC implementation that don't exist currently within Node. You'll be able to do things like actively track the data flow, right: your rate of transfer, so you can see on a particular connection if an attacker is sending you headers very, very slowly.

You'll be able to detect that and respond actively, all right. You'll be able to see if you're being flooded, all right, with a ton of a certain type of packet or whatever else. We're going to be able to track those things. And those are actually going to be exposed; it's not a capability that has existed in Node before, but it will be part of the QUIC implementation.
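A hypothetical sketch of that kind of rate tracking (all names invented; the actual implementation details were still in flux at the time of this talk):

```javascript
// Hypothetical rate tracker: record how many bytes a connection has
// delivered over time and flag peers trickling data suspiciously
// slowly (the slow-header attack described above).
function makeRateTracker(minBytesPerSec) {
  let bytes = 0;
  let startMs = null;

  return {
    record(chunkLength, nowMs) {
      if (startMs === null) startMs = nowMs;
      bytes += chunkLength;
    },
    tooSlow(nowMs) {
      if (startMs === null) return false;
      const elapsedSec = (nowMs - startMs) / 1000;
      if (elapsedSec < 1) return false; // too early to judge
      return bytes / elapsedSec < minBytesPerSec;
    },
  };
}

// An attacker sending 10 bytes over 5 seconds trips a 100 B/s floor:
const tracker = makeRateTracker(100);
tracker.record(10, 0);
const slow = tracker.tooSlow(5000); // → true
```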

So adding those features is what's really going to make it take a little bit longer to get QUIC out of experimental. All right. Come help us. It's a lot of work right now. There's probably about four or five of us that have been working on this. I'm extremely happy I've got a couple of them on my team at NearForm that are helping on this. But they also are working on other things. So if you're looking for projects to contribute to, right, I would love to kind of walk you through and get you bootstrapped on this, or any part of Node.

I do a thing: basically, probably about an hour a week, I dedicate to helping mentor people in the community. If you have something, maybe questions about open source, if you just want to walk through it and say, "Hey, help me understand something," you can schedule some time with me and I'll just kind of walk you through whatever it is. So if you want to open a pull request on Node, let me know and I'll help you open your pull request on Node. You can reach out to me at jasnell on Twitter.

You can reach out to me anytime on that and I'm happy to do that. I love helping folks get started with contributing to open source, and it doesn't have to be Node; it can be anything, any project. Feel free to hit me up. And that's basically it. Like I said, it was a QUIC talk. There's not a lot to it. I can demo a few things if you want. It's just going to be text on the screen, but if you want to see it, we can. You want to see it? Alright.

Alright, let's switch over here. I'm going to have to increase my font size here a little bit. All right, to give you an idea of just how big this has been, though. So Node is a mix of C++ and JavaScript development, right. Most of what's in there is C++. So we have, how many files do we have?

So we have the Node QUIC crypto file; this is all the crypto code, and this is not even the OpenSSL part. So we've got just about 1,200 lines there. I think there's about 13 files here, 14 files: the QUIC stream, the QUIC session. This is where most of the magic is happening. There's a ton of stuff in here, but let's switch over to look at some code.

Alright, this is a test file. It's actually doing quite a bit; it pretty much exercises every part of QUIC. Let me just kind of walk through some of what we're seeing here. Alright, so createSocket, require QUIC. That's going to be your main entry point, right. This createSocket is always where you're going to start, whether you're creating a server or whether you're creating a client, it doesn't matter. One of the things that we recently added to Node is keylog support.

If you use Wireshark, for instance, you can now enable keylogging and use Wireshark to decrypt TLS traffic in real time, so you can debug the flows that are going back and forth. It's absolutely phenomenal. I use it specifically for actually debugging Node itself, which is great. So we can do that. Here we're creating a server, telling it the server port. We're going to change this based on whether we have keylog on or not, because keylog always has to be on the same port. But with Node, we normally run our tests on a random port.

We want to validate the address. What that means is: when a QUIC packet is sent from one endpoint to another endpoint, because of the nature of UDP, you can send one QUIC packet in one moment and it will actually take one network path; you can send it again and it'll take a different network path. You want to make sure that a QUIC packet you receive is actually one you expected from the intended sender. So there's address validation that occurs, and it's basically a challenge protocol: if you receive a packet, right.

You can actually issue them a challenge and say, "Hey, are you really who you say you are?" Right. So that's actually built into the protocol as well. We have that in here. This diagnostic packet loss right here is something we added specifically for debugging, to simulate whether a packet gets lost or not. Don't use that in production. You can create your own protocols.

ALPN is how you actually identify your protocol. All right. It's part of TLS; basically, it's an extension to the TLS handshake. It's where you actually say, "Hey, I'm going to speak HTTP/2 [inaudible 00:31:20] HTTP/3." Right.

So here, because you can create your own protocols, you can specify whatever you want here. If you specify HTTP/3, it'll kick in the HTTP/3 semantics. This is some keylog stuff. Alright, so we're telling the server to listen right here, pretty straightforward. This is just going to tell it to listen on the port we configured earlier. It's going to say, "I want to listen for server requests with these TLS details."

I'm going to request certs from the clients, so that the clients can provide a certificate. I don't want to reject unauthorized, which means it's okay if the [inaudible 00:32:00] is not trusted. Alright. Then we specify a little bit more: this max crypto buffer is basically a security mechanism. You don't want to have somebody flooding you, sending too much data. And once we are listening, we just say server on session.

And then we can actually start responding. Okay. There's some test things in here I can skip over. But when the session is secure, that means the TLS handshake has completed. In this particular case, we create a unidirectional stream, right. So the client hasn't actually sent anything; all they've done is establish a connection. In this particular case, since we're not using HTTP/3, we're just using QUIC, we're just going to go ahead and create a stream and send some data. All right.

The client hasn't actually requested anything at this point. It's actually quite powerful when you're creating your own protocols. Right. In one of the experimental cases of this, I actually flipped the roles, where once the connection was established by a client, the server actually started acting as a client, requesting data from the client. Right. It was just fun. I was just screwing around. All right.

When a session receives a stream from the client here, again, it's just a Node stream. That's all it is. So right here, we're opening an instance of this file and sending the contents of this file in response. Right. If you've ever used streams in Node before, it's exactly the same. There's nothing different about it. Okay.

Let's see what else. Let's look down here. All right. When the server is ready here, I go ahead and create the client. So this shows you how to create the client. I'm using a different socket in this case, but same thing: create the socket, pass in the details here. To actually create the client QUIC session, we're using this connect call. All right, some other details in here. But when the handshake is complete, we're going to receive the secure event.

And from there, we come down here, this is a bunch of test stuff; from there, we just create the stream and send data on it. Right. Very, very straightforward. Now, if this version of the build is not broken, it should actually work. So hold on.

And yes, I develop on Windows. This question came up earlier, why do I develop on Windows and not a Mac? Because the keyboard works. Alright. So, all right, it actually worked. Alright, so here at the end: we've got two QUIC sockets that were created. How long they lasted, we actually have the duration, and then a second set that existed. We see the number of bytes that were received and sent. Bytes received and sent, packets received, packets sent.

So we had a little bit of packet loss. There's one packet that was lost here. Packets received, packets sent, yeah, there were two packets that were lost that way. We ignored a couple of them, or ignored one here. And the reason we would ignore a packet is if, let's say, the server's being torn down. Right.

We don't care about it, right, we may ignore it; or if it's malformed, we may ignore it. Collecting statistics is not something Node has done before. This is one of the things that we're actually adding, so you can actually see all this kind of stuff. Let's kind of look through some of the details here. This gives you some really low-level details of QUIC.

How much time do I have left? We're good. Okay. All right. Is this interesting? Is it useful? Alright, so this is a fun thing with QUIC. With TCP and HTTP/2, when you have a connection, it's actually bound to your IP address. Right. So if you're watching a video on your phone and you're on your home WiFi, and you're taking a walk, and it switches over to LTE when the WiFi is lost, what happens to your connection?

It drops. It has to be re-established. Anybody that's been on a Zoom call and done that knows it pauses, halts, and then goes through this reconnection, right. That's because it is tied to the IP address, and when that network changes, the IP address changes, you have to re-establish the connection. With QUIC, every QUIC connection has what's called a CID, a connection ID, that is completely independent of the IP address.

So you can establish a QUIC connection on LTE, switch to WiFi, and that connection remains valid. You do not have to re-handshake, right. So I've actually tested this, you can switch the radio on and off. You'll see a little bit of lag, a little bit of latency, because it has to redo the path validation, right. And there's going to be some packet loss every time you switch, right. But it doesn't have to re-handshake. You do the TLS handshake once and it stays up.
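The reason the connection survives an address change can be illustrated with a toy lookup table. This is not how Node's QUIC internals are written, just a sketch of the idea: the server keys sessions by connection ID rather than by remote address, so a packet arriving from a new address still reaches the same session.

```javascript
// Toy illustration of CID-based routing: sessions are looked up by
// connection ID, so a packet from a new source address (WiFi -> LTE)
// still reaches the same session with no re-handshake.
const sessions = new Map();

function handlePacket(packet) {
  let session = sessions.get(packet.cid);
  if (!session) {
    session = { cid: packet.cid, packetsSeen: 0 };
    sessions.set(packet.cid, session);
  }
  // The remote address may change; the session does not.
  session.remoteAddress = packet.remoteAddress;
  session.packetsSeen += 1;
  return session;
}

// Same CID arriving from two different addresses: one session.
const a = handlePacket({ cid: 'abc123', remoteAddress: '10.0.0.5' });
const b = handlePacket({ cid: 'abc123', remoteAddress: '172.16.0.9' });
console.log(a === b, b.packetsSeen); // true 2
```

Real QUIC adds path validation and CID rotation on top of this, as the talk notes, but the address-independent lookup is the core of it.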

The other nice thing about it is there's no time limit, right. There is a maximum number of packets that you can send over a QUIC connection, but the idea is it's not time-limited, right? So you can actually have a long-lived connection that's not actually sending data back and forth. Right. So it's not actively using bandwidth, but you're still maintaining the connection. And the handshake remains valid, right? So you can basically sleep a connection without having to redo the handshake. There's also TLS session resumption, so you can tear down the connection.

But if you save the key information from that session, then you can create a new connection, pass it that handshake information, and resume the session. Right. So if your connection does get torn down for whatever reason, you just resume it without having to do the full TLS handshake. So there are several really cool things about QUIC, and they're specifically tied to how they designed the CID system to work, right.

So any single connection is actually going to have multiple CIDs. Because as the path changes, those CIDs are going to rotate and change. Some will expire, new ones get created. So you're always going to see a pool of those things. Alright, let's see what else. The protocol is pretty chatty, and what you'll see is there are a lot of small packets being sent, a lot of small UDP packets. That is one of the concerns. So QUIC is not all roses, it does increase your network bandwidth quite a bit, because on top of the data it's sending out all of these acknowledgments and all the retransmissions.

If you have a lot of latency, if you have a lot of loss on your network segment, you're going to see a lot more traffic, right. So anybody that's planning for what bandwidth you're going to use, you're going to see significantly higher bandwidth with QUIC. Alright.

All right. This is basically just showing, reading from the bottom up: we received a QUIC packet, we're processing it. Let's see, right here. So the minimum data rate, this is interesting. This test shows one of the new capabilities that we have with QUIC in Node that you don't have with any of the other networking protocols: you can actively monitor the data rate, data transfer, and throughput while the connection is live. Right.

So we're doing that here. And it's done as basically a histogram, so you can see percentiles. So basically, what percentage of your traffic is too slow, so you can accurately respond to slow hosts or slow clients. Let's see what else. I think we're just about done. There was data transfer back and forth, we got a little success. Like I said, it's pretty chatty. In here we're seeing that the certificate information was provided. I'm not going to show all of that, because nobody wants to see [inaudible 00:40:56] certificate, it's boring as hell. But yeah, so it is working. It is functional, which is good. Last week it wasn't.
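The histogram itself isn't shown in the transcript, but the idea of using percentiles to spot slow peers can be sketched in plain JavaScript. The sample values here are made up; in the real case they came from the live QUIC session's statistics.

```javascript
// Sketch of the percentile idea: given observed data rates (bytes/sec,
// hypothetical values), find a percentile so unusually slow clients can
// be flagged. Uses the nearest-rank method on a sorted copy.
function percentile(samples, p) {
  const sorted = [...samples].sort((x, y) => x - y);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

const rates = [1200, 5000, 4800, 300, 5100, 4900, 250, 5050];
const p25 = percentile(rates, 25);
console.log(p25); // 300: the slowest quarter of peers fall below this

// Anything below the 25th percentile could be treated as a slow client.
const slow = rates.filter((r) => r < p25);
console.log(slow); // [ 250 ]
```

Node's QUIC histograms exposed percentile queries in a similar spirit, just computed natively as the connection ran.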

We'd get about a third of the way into this and then it would just seg fault. And I've seen more seg faults in this project than I ever have in my entire life, which is absolutely phenomenal. I love it. My 14-year-old, for Halloween, was going to go as a seg fault. He was going to have this sort of picture pasted to his chest, only about halfway printed, and then a bunch of garbage, and then the words "seg fault" on there. Such a nerd. Anyway, that's basically it. That's QUIC. Any questions?

[inaudible].

No. Okay. I'm going to say no, but it really depends on how the client is written. Right.

Got it, okay.

Alright, so you are in complete control. In case you didn't hear the question: the question was about when you want to flip the relationship between the client and server, right, so once the connection was established, the server started acting as the client. In the code that I wrote, the Node client was basically doing whatever I told it to do. Right. So you write the code, you're in complete control of what data is being sent. There is nothing in Node that makes the transfer of data automatic, right.

So it's not like when you're talking to, like, an Nginx server, where you send a request for a file and you get that file back automatically. That's not built in. What we have is, it'll tell you when a connection has been established, and it'll tell you when a stream has been created. It is up to you to connect that to actual data. [inaudible 00:42:48].

No, but, you know, the working group is working through a number of potential issues with QUIC that do reveal information. So like I said, going back to the point about the headers, let me see here. It's the layering, right. The layering is set up so that QUIC over UDP looks like normal UDP traffic. It's going to have the destination IP and the source IP here. Right. And there's also a little bit of routing information in the QUIC header itself that is visible to the network. Right. So there is actually a concern about privacy, about some information becoming visible and people being able to do basically statistical analysis on the packets based on where they're coming from.

[inaudible].

Yes, so there's some concern there that the group is working on, but a lot of it is just going to be inherent in the protocol. Right. There's not going to be a lot you can do about it. But in terms of user-level data, application-level data, nothing in Node is going to make it automatic. You'll have to wire it up.

[inaudible].

Yep. And that's just endemic to UDP.

[inaudible].

Yes, packet loss. There's acceptable packet loss. There has not been, to my knowledge, a ton of experimentation with using QUIC for video. Right. Now, I know Google has done some work with this. There are some thoughts that it will be a nicer protocol to use, but there's not been real heavy experimentation with it. So it's going to be interesting to see where that goes, especially with some of the newer features of QUIC, with the flow control on there and the fragmentation of the data that occurs. It's really not clear what the latency is going to be with video.

But yes, all the browsers are working on support for this, and quite a few of them are rolling out experimental support. Chrome, I think, has it in either the Canary or beta channel, so you can actually start using it there. And more importantly than the browsers, all of the middlebox vendors, Nginx and all the others, have committed to supporting it as well. So unlike HTTP/2, right, they're all saying, "Yep, we will fully support it."

And they're fully on board with this approach. But there are a ton of unknowns, particularly around what the additional latency might be for high-bandwidth kinds of applications.

[inaudible].

Right, right. Okay, so the first question: how does the role of the middleboxes change, like Nginx? Right now, one of our recommendations with Node is you always put something like Nginx in front of it to basically act as the TLS terminator. Right. So if you have a secure connection, it comes into Nginx, because it's able to handle that much more efficiently than Node can. If you try to do TLS termination in Node, it just absolutely kills the performance of Node. Right.

And part of that is the way that the TLS stack in Node is written for HTTP/2 and HTTP/1, right. It's [inaudible 00:46:45] written in C++, and then there's the JavaScript. When you have a TLS connection come in, there's a lot of bouncing back and forth across the C++ and JavaScript boundary that occurs. And that is extremely slow. Okay.

So we tell everyone, just put Nginx in front of it. For one, Nginx is more hardened, right, there's less chance of security issues, that kind of thing. With QUIC, that changes somewhat. We have TLS built in; there's no way around doing TLS termination at Node. Right. The difference is that with QUIC, we do all of the TLS at the native layer, no part of it happens at the JavaScript layer.

And there's none of that bouncing back and forth, so it is significantly faster to do that TLS termination with QUIC in Node than it was before. Now, the way QUIC works, with TLS built into the protocol, into the packets, the middleboxes don't actually have to do TLS termination anymore. They can route those packets without ever actually seeing into them. Right. They don't have to terminate, they can just take the packet and pass it on to Node.

And Node can do the TLS. So the role of the middlebox changes somewhat. There's less that it actually has to do. It can just look at the packet and forward it where it needs to go. The only qualification to that is if it wants to do HTTP header-based routing.

[inaudible]

Then it actually wants to look at the request, right, and it has to do more. But if all it's doing is passing things along, there's less that it has to do. Okay. Second part of your question, just refresh my memory?

[inaudible]

Right. Right, right. Okay, so the trade-off with TCP, with all the flow control and acknowledgments: [inaudible 00:48:48] all the same stuff layered in. The difference is the lack of head-of-line blocking. That's the only difference. So with TCP, right, when a packet is lost, it blocks, everything stops, until you have re-sent it.

With QUIC over UDP, you can keep everything else flowing and just retransmit the things that are lost. The problem is that with UDP, if you're doing a lot of retransmissions, you're sending the data once, you're sending it again, you're sending all the control information. And the control information, the acknowledgments themselves, can get lost. They're just UDP packets. Right.

So even the acknowledgments have acknowledgments. Right. So you end up seeing a significant amount of additional bandwidth being used, with all this additional data being sent over the wire. So: no head-of-line blocking, but higher bandwidth. And that's the trade-off. Okay, any other questions?

[inaudible].

Yep. So yeah, the question about buffering. In theory, the way QUIC is designed, if you took the flow control piece out of it, you could send the entire stream of data in a completely random order, in which case you end up having to buffer everything, right.

And you can very quickly run into issues. This is specifically why the flow control mechanism is put into place. Right. So flow control basically gives you a window, and the window is a very specific size. And initially, it's very small. You can only send me data that fits within this window, and every packet that is sent is given a size and an offset. So I'm saying, basically: from this point in the stream, this amount of data, right.

So the buffering is tied specifically to the flow control window. Right. So my window is here, you can only fill this buffer. If you send me anything outside that buffer, anything outside that window, I'm going to ignore it. I'm not even going to acknowledge it; it's going to be treated as if it's just a lost packet. From the sender's point of view, they will just re-send it.

If you're not actually fitting within that flow control window, if you're not adjusting it, it just ends up still being lost. Right. From the receiver's point of view, though, they're actively protected: you're only going to give me this much, instead of me having to buffer everything.
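The receive-window rule described above can be sketched in a few lines. This is a toy model with made-up numbers, not the real QUIC state machine: data is accepted only if it lies entirely within the advertised window, and anything outside it is silently dropped, exactly as a lost packet would be.

```javascript
// Toy flow control window: accept data only if [offset, offset + length)
// fits entirely inside the advertised window. Numbers are illustrative.
function makeWindow(start, size) {
  return { start, size };
}

function accepts(win, offset, length) {
  return offset >= win.start && offset + length <= win.start + win.size;
}

const win = makeWindow(0, 1024); // small initial window, as in the talk

console.log(accepts(win, 0, 512));   // true: fits within the window
console.log(accepts(win, 512, 512)); // true: exactly fills the window
console.log(accepts(win, 900, 512)); // false: overruns, silently ignored
// From the sender's view the dropped data just looks lost and gets
// re-sent once the window has been expanded.
```

The receiver's maximum buffering is bounded by `size`, which is the protection the talk describes.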

Now, it's still a concern, though. Buffering, the memory requirements, are something that are still being figured out, right, to see just how finely tuned it needs to be and how much of an impact it's going to have, specifically within Node. It's something that we're going to be testing extensively before bringing it out of experimental.

[inaudible].

It is dynamic. As the session progresses, it will adjust dynamically, and the user code can adjust it. The way that we're doing it in Node, streams have backpressure, right. And basically, we control the flow of data based on how you're consuming it, right.

The way that translates into QUIC is, as you are consuming the data on the JavaScript side in Node, it is sending the flow control frames over QUIC. So if you're not reading data, you're not expanding the flow control window, and the client will be told to stop sending. Right. As you read data, the flow control window expands and the client is told to resume.

[inaudible].

Yes and no. So we're limited: IPv4 and IPv6 have slightly different maximum packet sizes to avoid fragmentation of the packets. The thing that will absolutely kill performance more than anything else is fragmenting the IP packets, right. So we want to avoid that. All QUIC packets need to fit within those limits. Right.

So that's what we try to stick to. Now, we can fill anything up to those limits. But the smaller the packets we send, the chattier it is, right, and basically the more additional latency we're going to have. So what QUIC tries to do is pack in as much as it can.

You can have as many individual frames within a single packet as you can fit. A frame might be a control frame, like a flow control window expansion, right. Another might be a TLS encryption frame. Another one might be a data frame for a stream. We try to fit as many of those into a single packet as we can. So, all right. Any other questions? Fantastic, this has been fun. Thanks.