Online gaming is everywhere: multiplayer games where players compete in real time, and single-player games streamed from cloud servers. The networking technology behind them is not always very complex, and UDP is the protocol responsible for low latency, one of the most important factors in online play.
What is UDP and why UDP over TCP
User Datagram Protocol (UDP) is a connectionless protocol: unlike TCP, it does not guarantee that data arrives, arrives in order, or arrives exactly once.
UDP provides process-to-process communication (via port numbers), whereas IP provides host-to-host delivery. A UDP datagram can carry up to about 65,507 bytes of payload; in practice, applications keep datagrams much smaller so each one fits in a single IP packet without being fragmented.
The advantage of UDP is minimal overhead (its header is only 8 bytes) and correspondingly low latency.
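That "connectionless, low-overhead" behavior is easy to see in code. Below is a minimal sketch using Python's standard socket module on the loopback address; the payload and addresses are illustrative:

```python
import socket

# Receiver: bind an ephemeral port on loopback (addresses are illustrative).
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
addr = recv_sock.getsockname()

# Sender: no handshake, no connection state. Just fire a datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"player_position:10,42", addr)

# The datagram arrives (on loopback) carrying only an 8-byte UDP header
# of overhead; nothing is acknowledged or retransmitted.
data, peer = recv_sock.recvfrom(2048)
print(data)                               # b'player_position:10,42'

send_sock.close()
recv_sock.close()
```

Note there is no connect/accept step anywhere, which is exactly what makes the first byte of game data arrive sooner than it would over TCP.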
If you don’t have gaming hardware and can’t run the best games in the world, or you’re running them at something like 20 frames per second, cloud gaming is the answer: you can play and stream in real time, with low latency, from a cloud machine somewhere else.
Most major cloud providers offer such a service, like Amazon’s Luna and Google’s Stadia; some have stricter requirements and are tuned especially for gaming.
If you have very little bandwidth, a UDP application can simply decide not to run at all; TCP, by contrast, has to work under every condition, so UDP applications are free to target a narrower use case. Even for running local servers behind port forwarding, UDP is recommended over TCP.
Case with TCP
TCP works right out of the box and you don’t have to worry about losing data, since reliability is built in. (TCP itself does not encrypt anything; encryption comes from TLS layered on top of it.) But even a little packet loss causes trouble: TCP’s in-order delivery means everything stalls behind the missing segment, so your character may get stuck, the game may even become unplayable, and that kind of latency is not tolerable in online gaming.
UDP, on the other hand, keeps on transmitting data without waiting for any acknowledgement from the destination, which makes it faster still.
Concept of queuing in UDP
In UDP, on the client side, when a process starts it requests a port number from the UDP software, and for every port number there is an incoming queue and an outgoing queue. When the process terminates, its queues are terminated with it.
The client sends messages through the outgoing queue and receives them from the incoming queue; UDP takes each outgoing message, adds the UDP header, and hands the datagram to IP for delivery. If there is too much traffic in the outgoing queue, it can stop accepting messages for a moment.
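In practice, the incoming queue described above corresponds to the socket's receive buffer: datagrams pile up there until the process reads them off one at a time. A small sketch on loopback (addresses and messages are illustrative):

```python
import socket

# The OS receive buffer plays the role of the incoming queue:
# datagrams wait there until the process calls recvfrom().
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    tx.sendto(f"msg-{i}".encode(), addr)   # three datagrams queue up

# Each recvfrom() dequeues exactly one whole datagram
# (on loopback they arrive in order).
received = []
for _ in range(3):
    data, _peer = rx.recvfrom(2048)
    received.append(data)
print(received)

tx.close()
rx.close()
```

Unlike TCP's byte stream, each read returns one complete datagram, never half of one and never two glued together.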
Whenever a message arrives for the client, UDP checks whether an incoming queue exists for the destination port. For example, if a datagram arrives for port 52000, UDP first checks whether there is a queue for 52000; if there isn’t, it discards the datagram. NOTE: on the client side, the port number is generated randomly.
The incoming queue can also overflow; since its buffer is limited, UDP drops the datagram in that case too.
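From the sender’s point of view, such a drop is invisible: sendto() succeeds even when nothing is listening, because UDP reports bytes handed to the network, not bytes delivered. A sketch (port 52000 is the example port from the text; we assume nothing is bound there):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Assume no process is bound to port 52000. The send still "succeeds":
# the return value counts bytes handed to the OS, not bytes delivered.
# The receiving host silently discards the datagram.
sent = sock.sendto(b"hello?", ("127.0.0.1", 52000))
print(sent)   # 6, even though the datagram goes nowhere
sock.close()
```

This is exactly the trade-off the article is describing: the sender learns nothing about delivery, and it pays no latency cost for that ignorance.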
On the server side, the server requests its incoming and outgoing queues using a particular well-known port number, unlike the client side, where port numbers are random. The server-side queues keep running for as long as the server is active, and every message that arrives on that port is directed to the same queue. In the case of queue overflow, the behavior is the same: the message is discarded.
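The fixed-versus-random port split looks like this in code. The sketch below binds a server to port 7777 (an assumed, arbitrary choice) while the client never binds at all, so the OS assigns it a random ephemeral port on its first send:

```python
import socket

# Server binds a fixed, well-known port (7777 is an assumption here);
# its queues live for as long as this socket stays open.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 7777))

# The client doesn't bind: the OS picks a random ephemeral port on the
# first send, and that becomes the client-side queue's port number.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"join", ("127.0.0.1", 7777))

data, client_addr = server.recvfrom(2048)
print(data)         # b'join'
print(client_addr)  # ('127.0.0.1', <random ephemeral port>)

# The server replies to whatever ephemeral port the client was given.
server.sendto(b"welcome", client_addr)
reply, _ = client.recvfrom(2048)
print(reply)        # b'welcome'

client.close()
server.close()
```

The server learns the client's random port from recvfrom() and simply replies to it, which is why the client never needs a well-known port of its own.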
TCP and UDP sit at the same layer, so it seems like they belong on the same level, but nothing could be further from the truth. TCP is intensely complicated: since the internet deals in packets, TCP handles congestion control and all of the retransmission and reliability machinery.
UDP does none of that. It is essentially a thin abstraction over an IP packet that adds a little bit of a header. You send packets out almost like raw packets over the network: sometimes they get there, sometimes they don’t.
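That "little bit of a header" is exactly 8 bytes: four 16-bit fields for source port, destination port, length, and checksum (RFC 768). A sketch packing and unpacking one with Python's struct module (the port values are illustrative):

```python
import struct

# UDP header layout (RFC 768): four 16-bit big-endian fields.
src_port, dst_port = 52000, 7777        # illustrative values
payload = b"ping"
length = 8 + len(payload)               # header (8 bytes) + data
checksum = 0                            # 0 = checksum omitted (legal over IPv4)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(len(header))                      # 8 -- the entire UDP overhead

# Unpacking recovers the same four fields.
print(struct.unpack("!HHHH", header))   # (52000, 7777, 12, 0)
```

Compare that with TCP's header, which starts at 20 bytes before options, and it is clear why UDP is the thinner wrapper.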