Overview
If you’ve played Counter Strike, the odds are pretty high you’ve heard someone complain about lag, rates, interp, or lerp. The terms get thrown around constantly, but a true understanding of the problem isn’t nearly so common. In this comprehensive guide, I’m going to cover the topic of Source Netcode and, more specifically, how it ties into your Counter Strike: Source experience. After reading this, you’ll be one educated s.o.b., and you’ll be able to fine-tune your Counter Strike experience for the best possible performance and the right feel for your play style.
Introduction
Counter Strike, in all of its various iterations and mods, has afforded players the ability to adjust settings down to the level of detail drawn on the eyes of player models. This can be great, especially for advanced users; however, customizing to that degree can be daunting if you’re newer to the game, and even a veteran CS player may find there are settings available they weren’t yet aware of. That’s where this guide comes in: we’re going to dig into Source Netcode and how it ties into your Counter Strike: Source experience.
No doubt, if you’re taking the time to read this exhaustive guide, you have played CS in some form long enough to have heard someone complain about “rates” or “interp” or “lerp”. But what does all the jargon really mean? To move forward, let’s first take a look at Source Multiplayer Networking (commonly referred to as netcode) and see exactly what the game engine and server are doing while you’re juan deag’ing each other in the face.
The Server
For all intents and purposes, the game server is the judge, jury, and executioner. The server generates the world in which the game takes place, collects data from each client connected to it, and then uses that data to determine which actions will be true in the world and which will be false (read: who shoots who).
The server builds and updates the world in time steps we’ve all heard of: ticks. The baseline timestep is 15ms, which works out to 66.66 ticks processed per second (the math doesn’t much matter for our uses here, and I won’t pretend to be qualified to teach it; suffice it to say that the default tickrate of a CS server is 66). Many servers run mods that allow a tickrate of 100, which effectively increases the amount of user data processed, at the expense of CPU power on the server side. Basically, this means the server is processing roughly 34 more ticks per second than a standard one, which requires more server CPU. If the server CPU cannot keep up with the data cascading into it, then you will begin to notice your gaming experience degrading. If you manage a server and you’re receiving complaints about performance, server CPU is quite often a great place to start looking.
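For reference, the tick interval is just one second divided by the tickrate:
- 66 ticks per second ≈ 1000ms ÷ 66.66 ≈ 15ms per tick
- 100 ticks per second = 1000ms ÷ 100 = 10ms per tick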
This process of managing the ticks is rather involved, and since it’s being done over a network in real-time, there are instances in which lost data can translate into poor in-game performance. Attached is a graphic provided by Valve, showing the basic workflow of traffic from the server to the client.
The server will effect changes to the world, and the objects within the world, according to data sent and received from each client in real time. Because every client is different in terms of PC specs, Internet connection, and distance from the server (all of which factor into ping and latency), it’s important for each client to tell the server how much data to send and receive. This is what we lovingly refer to as rates.
Rates
There are actually a few commands that fall under the same category as rates, and knowing what they mean isn’t necessarily as important today as it was when CS:S was the most recent iteration of Counter Strike. Modern gaming PCs and vastly faster Internet connections can (in most cases) mitigate any inherent performance flaws in Counter Strike, but it’s still important to understand how your client-side values can impact performance.
Setting the variable rate in your autoexec, config, or console will tell the game server the amount of bandwidth you have available to download data. This is given as a bytes-per-second value, so a rate of 100000 works out to around .76 megabits per second (Mbps is what most Internet Service Providers allot your bandwidth in). With modern high-speed Internet connections, just about everyone should be able to afford .76 Mbps of bandwidth for gaming, so for the sake of ease we will assume a rate of 100000 for this guide. This value is important, because the server will use it to dictate how much traffic you can receive. Setting your rate too low (or too high) can result in data loss.
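If you’re curious where the .76 comes from, it’s just bytes to bits to megabits (counting 1,024 × 1,024 bits per megabit; in the decimal units your ISP advertises, it would be an even 0.8 Mbps):
- 100000 bytes/sec × 8 = 800,000 bits/sec
- 800,000 ÷ 1,048,576 ≈ 0.76 Mbps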
To adjust your rate, set the following variable via the console or your autoexec.cfg. See my Guide on Autoexec.cfg files if you need help there.
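- rate 100000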
By requesting a higher snapshot rate from the server, you ensure you’re getting as much data as the server is processing, which keeps your client as close to the server’s world as possible. By changing cl_updaterate to match the server’s maximum allowed update rate, you do exactly that. For example, in NBK we’re using 100tick servers, so the correct value here would be cl_updaterate 100.
To properly adjust your rates for a 100tick server, set the following value:
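- cl_updaterate 100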
Clients on the server (read: players) will be sending user commands to the server as well, and the rate at which they’re sent can be adjusted with the variable cl_cmdrate. These are small updates sent from your machine to the server, telling it what you are doing: did you move, jump, or shoot? This data is sent to the server via your cmdrate, and if you aren’t using a correct value, you may suffer a disagreement between yourself and the server; unfortunately, the server has final say in what happens, so your best bet is to adjust this value to match the server’s maximum. Again, this being a 100tick server, you would want a cl_cmdrate of 100.
To properly adjust your rates for a 100tick server, set the following value:
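- cl_cmdrate 100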
Netcode and Networking
In an effort to keep this somewhat brief and at least mildly entertaining (and to avoid the necessity of a degree in Network Engineering), I’ll paint this topic in broad strokes so the concepts are easy to understand, while still explaining how Counter Strike: Source’s netcode impacts what you see on your screen.
Speed is king. Getting your data to the server (and back again) is of the utmost importance in FPS gaming. Generally speaking, you are best served by a low latency to the server you’re playing on; personally, I try to play on servers where my latency is less than 70ms. It’s important to note that ping, latency, and bandwidth are all different things, and each one is a factor in your Internet speed and overall online performance.
Bandwidth, which is allocated by your Internet Service Provider (ISP), is basically the size of the pipe through which your data travels. More bandwidth means a bigger pipe. For example, a 30 megabits per second (30Mbps) connection offers a much larger pipe than an 8Mbps connection, which means you can potentially send and receive a lot more data at any given time.
Latency is a measure of time, as its name suggests. The value displayed for your latency in-game is the amount of time it takes for data to get from your machine to the server. Ping is defined in a host of different ways across many platforms, but in the context of Counter Strike, ping is a measure of the time it takes for data to travel round trip between the client and server. Knowing this, we can determine that ping and latency are not one and the same, nor is either one a true indicator of performance on its own. To see each player’s latency, simply view the scoreboard in-game. To see the ping of each client in the server, simply type “ping” in console.
Latency as shown in-game:
Ping as shown in-game:
This doesn’t mean that a player with a latency of 8 will always kill a player with a latency of 110, because of the other factors we have already covered. As an example, let’s pretend for a second we have two players of equal skill using the same weapon, the same PC hardware, and the same game configuration. With all of their client-side values set identically, there is nothing else at work here aside from their individual latency and ping in relation to the server.
- Scenario
- Player 1 receives info telling him an enemy is coming to the area he is watching
- Player 2 suspects there is someone in the precise location Player 1 is holding
- Player 1 has a latency of 5 (ping 34)
- Player 2 has a latency of 75 (ping 118)
- Player 2 quickly peeks out and fires a shot into Player 1
- Player 1 fires a shot the exact moment he sees Player 2
- Player 2 dies a horrible death and sees Player 1 run away with his limbs intact
- Player 1 posts an annoying bind in chat because he’s a jerk and that’s what jerks do
Now, in our given scenario, neither player had any advantage in skill, PC hardware, game config, or defensive position. This is just an example of how getting data to and from the server in a timely fashion can be the difference between life and death in a very specific situation. It’s by no means all-encompassing, and due to the limitations of computing and networking, we can’t avoid every single instance of this, but it should give you a rough idea of how that data transfer between client and server can make or break your experience. You could argue that understanding this concept lets you play smarter in certain scenarios, particularly if you take note of your connection to the server compared to your opponent’s. Do you need a full understanding of these principles to become a better player? No. Of course not, but the more information you have stored in your grey matter, the better equipped you are to win.
Scenarios like this one are where you’ll most often hear someone say “no reg” or “fix your rates”, or blame generic “lag” for their death. Since you’ve read this guide, you’re well-versed in the settings of Counter Strike and you’ve already tuned your game for the best experience you can get…simply smile and let the other player ramble on about things they don’t understand. It’s better this way.
The fact of the matter is that the server can–and will–make mistakes. Data being sent over the Internet is subject to errors–plain and simple. You can’t force the server to see things the way they are on your screen, but using the proper settings (and understanding them) will allow you to see things closer to the way the server does, which for many folks will translate into a better overall gaming experience.
How it works in-game
A true explanation of “how it works” from A-Z would be more than anyone would be willing to read, so I will do it quick and dirty for you…just the way you like it.
As a client (read: player), your PC renders the game in units called frames; you’re probably familiar with the term FPS, or Frames Per Second. Each frame is rendered by your PC’s hardware and shaped by your mouse and keyboard input, and that input data is shipped off to the server as a snapshot.
The server then takes your snapshots and renders changes to the world based on the data within them. This can get a little confusing once you consider that the server is simulating the environment roughly 100 times per second (on 100tick servers), all the while changing the world in increments based on the data received from clients.
It’s pretty easy to imagine how dicey this could get if the server was sending raw data on every entity to every client. The amount of data would crush the server, so instead it sends a form of compressed data based around deltas (changes) to the world. Only things that change the world are worth sending and receiving, so the server sends these deltas to players as needed. There would be no benefit to sending data to every client about, say, a couple of barrels that are sitting in a location on the map if they aren’t being interacted with. This compression method serves as a way of only passing mission-critical traffic. In short, it only tells you what you need to see. Nothing else.
The enemy of all things in fast-paced gaming is time. Just as low latency is essential for getting your data to (and from) the server in a timely manner, the time it takes for those world changes to reach everyone’s screen shapes the experience too. This is where interpolation comes in.
Interpolation
On a 100tick server, snapshots are flowing between the server and each client 100 times per second, and that steady stream is what makes a smooth experience possible. Rather than drawing each update the instant it arrives, your client effectively renders the rest of the world a split second in the past so it always has two known snapshots to blend between. That blending creates something of a middle ground between where players actually are and how everyone sees them interacting with the world.
Think of it like this: if you fire a bullet into a barrel, you see the round make its impact nearly instantaneously because the frames are rendered locally on your PC. It makes sense that you would see it in real time, but what about another player watching you? They couldn’t possibly see it at the exact same moment, since the server has to process the world delta (the bullet’s impact on the barrel and any effect it has on the world) and then send it out to the other clients. This takes time, and interpolation is what smooths the experience between all players in the server: those 100 ticks per second give each client the data it needs to blend the changes together in sub-second timing so everything appears smooth. The graphic below can be slightly confusing, but look at it in the context of action on the server in less than one second.
From 10.15 seconds to 10.30 seconds, the server is able to receive, process, and send data back to clients so that a bullet impacting a barrel can show up in the world without jitter or other visual artifacting. Again, this is a quick and dirty explanation of how the game reconciles what you see with what other players see. That difference in time between what you see and what someone else sees can be tweaked, to a certain degree, with the interpolation delay, commonly called lerp.
Default CS:S Interp Settings:
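If memory serves, the stock values are the following (that default cl_interp of 0.1 is where the 100ms LERP figure mentioned below comes from):
- cl_interp 0.1
- cl_interp_ratio 2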
LERP
First and foremost, I want to mention that the default LERP value of 100ms is in place because it should yield the smoothest visual experience for players: with that much interpolation time, your client nearly always has fresh snapshots to blend between, so [most] players see [roughly] the same thing at the same time. Adjusting this variable can subject you to lag, jitter, or a generally less smooth gaming experience; that said, a lower LERP value is preferred for competitive play because it can offer a snappier, closer-to-real-time feel for shot placement.
Lowering your LERP value (in a perfect environment) allows you to see a little closer to what the server is seeing, for lack of a better way to put it. This doesn’t mean you have an advantage over another player per se, but it may help cut down on those instances where you peek quickly from behind cover and die a step or two after you’ve returned to it. There is less interpolation time smoothing the experience, which can make things feel slightly more accurate, albeit at the potential expense of jitter or other small anomalies. In summary, a lower LERP trims down the interpolation delay and provides a more instantaneous experience for the gamer.
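For the curious, my understanding is that the engine actually uses the larger of two numbers to decide your real interpolation time, which is why cl_interp_ratio and your updaterate matter just as much as cl_interp itself:
- effective LERP = max( cl_interp, cl_interp_ratio ÷ cl_updaterate )
- example: cl_interp_ratio 1 with cl_updaterate 100 puts the floor at 1 ÷ 100 = 0.01 (10ms)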
Here are my adjusted Interp values, and a bind to control LERP on the fly (from spec). Go into spectate and press the - or + key to change your LERP value to something that feels right to you. Generally speaking, you want as low a LERP setting as possible while keeping the net_graph display of LERP white in color. Yellow is acceptable, but orange indicates you should begin adjusting in the other direction.
- bind = "incrementvar cl_interp .01 .1 .0025"
- bind - "incrementvar cl_interp .01 .1 -.0025"
- cl_interp 0.02465
- cl_interp_ratio 1
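If you don’t already keep the net graph up to watch that LERP readout, you can toggle it on with the command below (net_graph 1 should include the lerp value; higher settings like 2 or 3 add more detail):
- net_graph 1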
Conclusion
I realize this is a lot to absorb, and if you have any questions or want more explanation on the topics I covered don’t be afraid to ask. Reply directly to this guide if you’d like, or message me privately on Steam.
I’ve done what I can to make this easy to read, but I realize it’s still nearly as long as the New Testament, and not everyone is going to be inclined to sift through it all. That being said, I’ll attach a copy of my autoexec.cfg to this thread so you can download and test it; feel free to modify it, scrap it, or completely dissect it for the pieces you find useful. For the purposes of this guide, I think the most helpful section will be the binds to change your LERP. Users who have never changed any settings will most likely be running the default Interp values, and will want to change those to match the ones I listed above. If you’re unfamiliar with autoexec files, please check out my Steam Guide to Autoexecs for some help there.
Hopefully this will clear up some questions, and get you in the server feeling a bit more like you know exactly what the hell is going on when people start complaining about rates, lag, interp, and the like. Thanks for reading, I appreciate your time.
Cheers,
Jimmy