Latency is 4-eva’

I filled out a survey this morning sent to me by a college kid from Ireland, Patrick Wilson (email him requesting the survey to help him out, and remove ANTI SPIZZAM from the email address before sending), about multiplayer online games. Either that, or Word just p@wned my comp full of macro viruses.

Anyway, there were interesting questions on it, and one made me really think. It asked whether people would pay for infrastructure that removes the latency experienced in Massively Multiplayer Online Games, or whether consumers would just continue to suffer lag in exchange for lower prices.

Naturally, I responded damn straight I’d pay, kill all lag!!!

He finished with: do you think latency will go away?

Lag go away? No way! Even if we all had G8’s (T3’s on crack, or whatever those G things are called), would your ping ever be under 10 ms? And if so, is that really the problem?

I mean, here’s the current scenario for an FPS (first-person shooter) like Counter-Strike (these aren’t the actual algorithms, just how I’d code them; see the rough sketch after the list):
– I click the mouse
– the mouse sends a message to the Windows message pump
– Counter-Strike gets the message when the CPU processes said instruction
– Counter-Strike then determines client-side if a hit is probable
– Counter-Strike sends an event packet to the server
– the event is processed down the networking stack into a packet
– packets fly across my DSL phone wire and hop across various servers until they reach their destination
– upon re-assembly, the message is sent to the CS server
– the server determines who the event affects (receiver of the shot, others seeing me shoot, me getting the results of my shot)
– the server fires off the necessary messages after confirming the event packet’s state matches the game state closely enough to count as a shot and a hit
– events do the reverse heading back to the clients
– messages bounce back across the wire, hopping across various servers
– at my comp, everything is reassembled, and Counter-Strike gets the message and updates the screen
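Here’s a minimal sketch of that round trip in Python. Everything in it is hypothetical (the class names, the fixed ping, the always-true hit check are all mine, not Counter-Strike’s actual code); it just illustrates client-side prediction followed by the server’s authoritative answer:

```python
import time

class GameServer:
    """Authoritative game state; the client's prediction never counts as truth."""

    def resolve_shot(self, shooter, target):
        # Re-check the event against the server's own game state.
        # Placeholder: a real server would run hit-scan math here.
        return True

class Client:
    def __init__(self, server, ping_ms):
        self.server = server
        self.ping_ms = ping_ms  # assumed one-way network delay

    def fire(self, target):
        # Client-side prediction: show the shot immediately,
        # then wait for the server to confirm or deny the hit.
        started = time.monotonic()
        confirmed = self.server.resolve_shot("me", target)
        elapsed_ms = (time.monotonic() - started) * 1000 + 2 * self.ping_ms
        print(f"hit={confirmed}, resolved in ~{elapsed_ms:.0f} ms")

server = GameServer()
client = Client(server, ping_ms=40)  # a decent broadband ping
client.fire("opponent")
```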

Now, EVEN IF the networking gap in this equation is removed, is latency still there? Yeah. There are too many components involved that take time. The event described above resolves in less than a second with a good ping, but that’s still latency; not real-life time frames.

At what point do all the components above become real-time, or when that time comes are we not using the components above?

I think latency will always exist, at least in computer/console games. The only ones truly hurt by it are the fast-paced FPSes; MMORPGs aren’t, as long as the connection is continuous and latency stays at a stable level. Compensating for network latency by tweening characters’ movements, auto-performing character attack actions, and queuing up events to smooth the experience are all great ways to hide lag.
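As a toy example of the tweening idea, here’s a sketch that smooths a remote character’s movement between server snapshots. The snapshot timing and the function names are my own assumptions, not anything from an actual engine:

```python
def lerp(a, b, t):
    """Linear interpolation between two values."""
    return a + (b - a) * t

def render_position(prev_snap, next_snap, now):
    # prev_snap / next_snap: (timestamp, position) pairs from the server,
    # arriving maybe every 100 ms. Render the character partway between
    # the last two known positions so movement looks smooth even though
    # the updates themselves are infrequent.
    t0, p0 = prev_snap
    t1, p1 = next_snap
    if t1 <= t0:
        return p1
    t = min(max((now - t0) / (t1 - t0), 0.0), 1.0)  # clamp to [0, 1]
    return lerp(p0, p1, t)

# Snapshots at t=0.0s (x=10) and t=0.1s (x=12), rendering at t=0.05s:
print(render_position((0.0, 10.0), (0.1, 12.0), 0.05))  # -> 11.0
```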

So to me, the question is, not will latency ever go away, but rather, will latency ever stop being an issue?

Playing multiplayer games over the years, I’ve seen drastic improvements in latency compensation, and with it the lag experience. Game developers are getting better at defining which parameters are show stoppers and which are acceptable boundaries, and they even leave a lot of that up to players.

For example, most games nowadays have clearly written latency-cap rules: if your ping exceeds a certain threshold a set number of times, you are booted. Since each client’s participation adds its own latency (it’s one more non-instantaneous client to keep track of), you end up with a weakest-link scenario: those with the highest pings can ruin the game for everyone else.
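A latency cap like that could be as simple as the following sketch; the 250 ms cap and the three-strikes rule are numbers I made up, not any particular game’s:

```python
PING_CAP_MS = 250   # hypothetical threshold
MAX_STRIKES = 3     # hypothetical "set number of times"

def should_kick(ping_samples_ms, cap=PING_CAP_MS, max_strikes=MAX_STRIKES):
    """Kick a client whose ping exceeds the cap too many times in a row."""
    strikes = 0
    for ping in ping_samples_ms:
        if ping > cap:
            strikes += 1
            if strikes >= max_strikes:
                return True
        else:
            strikes = 0  # a good sample resets the count
    return False

print(should_kick([80, 300, 310, 320]))  # True: three bad samples in a row
print(should_kick([80, 300, 90, 310]))   # False: the streak keeps resetting
```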

Different games handle this in different ways. In the early days of Starcraft, an online strategy game, almost half the games had suffixes in their names like “_Broadband Only”. If your latency indicator (five green bars) turned yellow or red, you were booted, even if it was the person hosting the game causing the problem. Racism based on color took a whole new twist.

Now, it’s by the numbers for most PC games. Until consoles reach that level of g33k maturity, you’ll still see the gauges and bars that showcase the very simple idea of network reliability, a concept that’s still new to some people. Even with broadband, things aren’t perfect.

The higher your ping, the more you are stigmatized. If it starts high but goes low, or is even remotely unstable, you are mistrusted. The moniker “LPB”, or low ping bastard, immortalized in early games like Quake, Unreal, and Half-Life, actually carries a lot of respect with it. With a low ping you have a better chance of doing well, mainly because the game gets quicker, and thus usually more accurate, information about your game state. Did you really dodge that bullet? If your ping is lower, probably so.

It’s not really about rich or poor, though; most of it is circumstance. I have DSL, for instance, but the connection sucks and is unreliable.

These stigmas, however, don’t carry over to MMORPGs as much, mainly because those games don’t require extremely low latency so much as stable connections. Dependable is more important than responsive.

Games have made great strides toward softening the latency blow, whether by pausing the game when lag gets really bad, or by letting you keep moving and interacting with the world and only blocking while they wait on the server’s response for the actions that truly need it, so your gameplay isn’t negatively affected.
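That last approach might look something like this sketch, where local actions apply instantly and only server-authoritative ones wait for confirmation. The action names and the split between the two kinds are my own invention:

```python
from collections import deque

pending = deque()  # actions awaiting server confirmation

def perform(action, needs_server):
    # Local actions (movement, chat) apply immediately; actions that need
    # the server's word (trades, loot) queue up until confirmed.
    if needs_server:
        pending.append(action)
        print(f"queued: {action} (waiting on server)")
    else:
        print(f"applied locally: {action}")

def on_server_ack(action):
    # The server confirmed; now it's safe to apply the queued action.
    if action in pending:
        pending.remove(action)
        print(f"confirmed and applied: {action}")

perform("move north", needs_server=False)
perform("trade sword", needs_server=True)
on_server_ack("trade sword")
```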

However, some games are still built assuming low-latency connections. It’s a lot like how newer software, rather than being optimized for today’s hardware, actually depends on hardware evolving so it can run new features that require more resources. Windows Vista (aka Longhorn) is a perfect example: not many computers today can run the beta well enough to get the true benefit of a hardware-accelerated OS (unless you’re a Mac g33k).

That says to me the same is expected of connections: they are expected to improve just as hardware does.

Now, this attitude towards hardware has brought renewed interest in interpreted languages, for example. They were formerly criticized for their slow speed because they aren’t compiled to machine code, but compiled just in time or on the fly (or to some higher-level bytecode). Now that hardware has improved, people can look past their shortcomings in speed… because those shortcomings no longer exist, or at least not enough to adversely affect anyone.

So will networking continue down the same path? Will connections get so fast and reliable that games take advantage of that fact? It’s following the same trajectory as hardware, so as far as developers are concerned, apparently so.

I still think there are too many factors involved to truly remove latency, not just the networking components, but the hardware and software.

Even if things used quantum computing or some other faster-than-light computing power, there is still one problem… people.

People communicate via computers, and communication is an inherently flawed process. Every message goes through two filters: the sender interprets what they mean to say as they deliver it, and the receiver has to interpret what was said. These two filters change the message from its original intent, and they take time, too.

So, even if hardware, networking, and software are instantaneous, people communicating and interpreting information are not.

…still, the thought of lag becoming history is just awesome. I’m sure it’ll resurface when I try to play Quake 27 with someone who lives on Pluto while I’m on a wireless gravity packet accelerator (uses the Sun’s gravitational pull to slingshot packets to their destination) or even a hyperspace satellite (drops into hyperspace for faster-than-light travel to its destination orbit, the quicker to deliver information). It’s all good; striving to be more connected is what the Information Age is all about, right?

7 Replies to “Latency is 4-eva’”

  1. There was a show on TV a little while back examining Internet2. They came to the conclusion that the biggest problem is latency: we will never be able to have someone in Europe play live music and jam with someone in the U.S., merely because the speed of light (and therefore even 100% efficient data transfer) isn’t fast enough – superluminal theory notwithstanding ;)

  2. Aw crap… so the only way it could work is if you sent me an IM, and it arrived before you sent it.

    I remember a thread on Slashdot about this, discussing interplanetary communications in Star Trek.

    Basically, either the computer had to have a cybernetic implant making it precognitive, knowing your every move before you did, OR the messages had to be sent before you actually sent them so they’d arrive in the future, making it real-time.

  3. Well, the distance between New York and London is approximately 5,500 km, so to ping London at the speed of light you’d need light to travel 11,000 km at a speed of 300,000 km/s… so the baseline lag is around 37 milliseconds. That’s huge! Next time your client bugs you about FlashComm lag, blame it on the speed of light ;)
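    (As a quick sanity check of that arithmetic – the 5,500 km figure is the commenter’s rough estimate:)

    ```python
    # Best-case New York -> London ping: round-trip distance over the speed of light.
    distance_km = 5_500        # one-way, rough estimate
    c_km_per_s = 300_000       # speed of light
    best_ping_ms = (2 * distance_km / c_km_per_s) * 1000
    print(f"~{best_ping_ms:.0f} ms")  # ~37 ms, before any routing overhead
    ```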

  4. Well… how would OC lines deal with this problem? Though very expensive, we are talking about fiber-optic lines and therefore the speed of light… right? I remember when I worked at AT&T in Atlanta, before it became Comcast, and there was a tech from BellSouth who started to work at AT&T. He was saying how they had some homes up North where they were running test fiber-to-the-curb lines, which basically means that instead of running fiber to an amp or something and then switching to coax (like cable does), they were running fiber right to the outside of people’s homes or a neighborhood. He said it was expensive, but repairing it was fairly simple, and at the time the speed was insane. He said they had to somehow throttle the speed because it was too much for people’s modems and was causing problems. Tall tale? Beats me, but it sounded realistic. I’m sure technology like that will help in the future. Go here for speed info: http://www.directglobalcommunications.com/OC3_OC12.htm

  5. A loooooooonnnnnnngggggg time ago I remember reading in some science publication how some scientists were theorizing about moving data faster than the speed of light using sound. I know how it sounds, but I swear I read it!!

    -erik

  6. There’s some confusion as to what the theoretical limit of the speed of light means. There ARE phenomena that can appear to move faster than light: for example, if you move an object at a speed near that of light in front of a light source, the shadow line, viewed from far away, would appear to move faster than light. These are only apparent, however. You cannot use this to send information.

    You have to be careful when you read science magazines that tell you someone found a way to move things faster than light. For example, you can craft an apparatus that will phase-shift a wave packet in such a fashion that, if you measure the wave packet from one end to the other, it will appear to have moved faster than light. I remember reading about that in a science magazine myself. However, in tiny print it was explained that the apparatus incurred its phase shift by modifying the shape of the wave packet, and that the speed-up was only apparent and could not be used to transmit info. Science magazines are worse than tabloids in many ways.
    Sending something faster than light in a fixed time frame (whatever that means in a special-relativity context) means causality breaks down. What that would imply on a high-level scale is largely unknown. But I think it’s safe to say that unless you’re a photon in an EPR experiment, don’t expect to break causality in your lifetime.
