Cingular’s HCD Labs

On my way to a LAN party last night, I stopped by the CHI (Computer-Human Interaction) meeting at Cingular Wireless’ Windward location. Her majesty works in Information Architecture there, currently in the HCD (Human Centered Design) group, so I managed to chill for an hour. I got a tour of the 4 main rooms they use for user testing, as well as a good description of their mobile unit.

The first room had like 7 monitors, all viewing and controlling a computer in the other room. One showed the desktop exactly as the user saw it, so you could watch everything the user was doing. The user testing room had 3 cameras mounted on the ceiling, masked so they weren’t noticeable and wouldn’t distract the user. These cameras were controllable from the main room, and you could zoom all the way in on a phone interface.

The user testing room had 2 microphones with great pickup, and the testers could talk so the user being tested could hear them on a separate speaker. The testing room itself was comfortable, office-like, and spacious; you wouldn’t know you were being studied like a lab rat.

Apparently, the computer in there is not all they use; sometimes they put kiosks in there, or actual phones. One of the rooms looked like a mailroom, except instead of mail, there were phones in the slots. The highest it went for Nokias was like the 6630; the 6681 wasn’t anywhere in sight. Japan is so frikin’ ahead of the US.

The other room was merely for observation: a huge monitor showed what the user was seeing, with a small part of the screen showing the user themselves.

The mobile unit is actually like an RV with 5 computer stations in it. They wire up the sales staff in a store with mics, put cameras throughout the store that aren’t very noticeable, and watch for a day.

In the seminar room, they had a few posters showing before-and-afters; one for a website, and one for a kiosk. They had notes showing what the user said, what was changed based on that feedback, and why. Users say the damnedest things.

All in all, it’s insane how much work goes into user testing, re-evaluation, and multiple iterations of interfaces.

AJAX & The Alternatives

Pete Freitag linked to this article; JD has some coverage here, and Leo Bergman echoes my vendor independence contentions.

I thought it was well written, and I agreed with what Alexei White said about why AJAX is more hyped and talked about than Flex/Flash… well, I’d have agreed if he had written it 12 months ago; the skillset argument doesn’t hold water anymore. I’ve always been frustrated that developers talked more about AJAX this past year than Flex & Flash when we do so many things better, and have been doing them for a lot longer.

Is Flash hard for developers to learn and utilize? Damn straight; that’s why I was employable for so long. It’s also one of the many reasons Macromedia made Flex and removed that barrier. According to his article, we now just have to fight off “Vendor Independence”… but… I like Macromedia!

The vendor lock-in argument never made any sense to me. I think Patrick Mineault is very talented, and he has done an extremely fantastic job with AMFPHP, an open-source alternative to Flash Remoting for PHP. If he goes on vacation, or gets p@wned by border patrol again while I’m in the middle of a project, I still have the “email list” to help me through my project, right? And can we all be sure he won’t repeat the history of AMFPHP and let it languish for another 6 months…? In all fairness, Patrick learned his lesson with BP and is always improving AMFPHP and asking for the community’s input, so my example isn’t true to what he is really about, or what AMFPHP is, was, or will potentially be. I’m just using him as an example of an open-source project I know of.

Still, no thanks. Paying someone to build, maintain, and improve on a technology that works seems perfectly logical to me, and my skillset doesn’t suffer merely because I use Flex vs. Tapestry to develop web applications. I must not understand what vendor lock-in is. I use AMFPHP because it works, I like using it, and I need it for work and side projects. That’s it.

However, the greatness of blogs shows through, and the comments on Alexei’s post nip at the veracity of it, giving some corroboration as well as counter-points. David Mendels from Macromedia quickly corrects the skillset misnomer.

Additionally, his blog is aptly titled “AJAX Info”, whereas mine is all about Flex and Flash, so we’re both subjective. Furthermore, I’m somewhat of a technology bigot because while Macromedia has been cool publicly about saying “It’s not AJAX vs. Flash, but AJAX and Flash”, I’m like so not down with that.

I really don’t understand what AJAX is appropriate for technically, and if someone says “This is an appropriate usage right here… now will you code in AJAX?”

I’d reply, “Nope, I’ll just go find someone to pay me to code in Flex on some other project.”

Regardless, I like reading articles that attempt to compare and contrast technologies, cutting through the hype, especially when those articles are written by someone in another area of technology than I am. Especially blog posts, since you can get greater context reading the comments, which are not deleted just because they oppose the writer’s view(s); well done.

In conclusion, I still think one thing programmers never mention is that our stuff, Flex & Flash, has the opportunity to look better, more customized, and true to a company’s brand. Although my wife’s awesome with a DIV, I can own with a MovieClip.

How To Prevent Getting Screwed by Alienware

There are really only 3 things to do, although they’re not easy.

  • Upon receipt of your order, ensure ALL items on the packing slip are included. Missing a Windows XP CD? You have 14 days to tell them, or tough $hI0t.
  • If anything fails to work perfectly within the first 30 days, return it for a refund. If you don’t, you won’t like the next item.
  • Keep calling until the problem is resolved.

That last one sounds pretty brief, and it is, but it requires more explanation.

Alienware’s support system is pretty simple. The support person on the other end of the line MUST diagnose the problem with your laptop remotely. That means minutes to hours on the phone, with you physically following the directions from the support person: rebooting, tweaking the BIOS, etc.

There is no “I want to FedEx this laptop to you guys”. You either deal with them on the phone to determine and resolve the problem, or your computer will remain broken, period. It’s black and white.

Once they find the problem, they’ll send you a replacement part, have you physically ship the system in to them to fix it, or if you ordered a special plan, they’ll send someone on site.

Again, NONE of those 3 things will ever happen unless you work through the problem on the phone first. My wife made 2 calls, and got cut off the first time.

I made 3, after getting cut off during the 2nd, but dammit, I resolved that mofo!

Our personal experience: 2 months and 7 hours’ worth of phone calls later, we found out what’s wrong with her majesty’s Area-51 (7700) laptop. Apparently 1 of the RAID drives went bad. They are shipping a replacement drive, and I have to ship them the defective one to remove an $80 charge on my credit card. I managed to sweet-talk the guy on the other end into sending me a Windows XP CD to install Windows, since my copy here for another computer doesn’t work. We never got ours in the shipment (see list item 1).

If my putting the hard drive in doesn’t fix it, it’s off to the phones for another 3-hour remote-hardware debugging session.

If you buy an Alienware, be aware they assume you’ll either fix any hardware or software problems yourself, or else you HAVE to deal with their support staff. Mind you, the support staff I talked to were nice and patient; I, however, as a full-time contractor, don’t have time for that crap. Time is money for me, and support calls cost me more than they cost Alienware.

After reading Ray’s ongoing battles with Dell support, apparently the competition isn’t any better. Anyone know of a computer company whose support options DON’T suck; mainly, one where I can just send them the computer and let them fix it? Until I give up my career in software and start learning hardware, I don’t have time to make support calls.

Latency is 4-eva’

I filled out a survey this morning sent to me by a college kid from Ireland, Patrick Wilson (email him requesting the survey to help him out, and remove ANTI SPIZZAM from the email address before sending), about multiplayer online games. Either that, or Word just p@wned my comp full of macro viruses.

Anyway, it had interesting questions, and one really made me think. It asked whether people would pay for infrastructure to remove the latency experienced in Massively Multiplayer Online Games, or whether consumers would just continue to suffer in exchange for lower prices.

Naturally, I responded damn straight I’d pay, kill all lag!!!

He finished with: do you think latency will ever go away?

Lag go away? No way! Even if we all had G8’s (T3’s on crack, or whatever those G things are called), would your ping ever be under 10 ms? And if so, is that really the problem?

I mean, here’s the current scenario for an FPS (first-person shooter) like Counter-Strike (these aren’t the actual algorithms, just how I’d code them; a rough sketch in code follows the list):
– I click the mouse
– the mouse sends a message to Windows’ message pump
– Counter-Strike gets the message when the CPU processes said instruction
– Counter-Strike then determines client-side if a hit is probable
– Counter-Strike sends an event packet to the server
– the event is processed down the networking stack into packets
– the packets fly across my DSL phone wire and hop across various servers until they reach their destination
– upon re-assembly, the message is sent to the CS server
– the server determines who the event affects (receiver of the shot, others seeing me shoot, me getting the results of my shot)
– the server fires off the necessary messages after confirming the event package state matches the game state enough to designate a shot and a hit
– the events do the reverse heading back to the clients
– the messages bounce back across the wire, hopping on various servers
– at my comp, the message is reassembled, and Counter-Strike gets it and updates the screen
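
Here’s that flow as a minimal TypeScript sketch. Every interface and function name below is hypothetical (nothing here is from Counter-Strike’s actual code); it just shows the client-predict, server-confirm round trip:

```typescript
// Hypothetical sketch of the click-to-hit pipeline in the list above.
interface ShotEvent { shooterId: number; targetId: number; clientTick: number; }
interface Socket { send(data: string): void; }

interface ClientGame {
  localPlayerId: number;
  tick: number;
  socket: Socket;
  raycastFromCrosshair(): { id: number } | null;
}

interface Player { socket: Socket; }

interface GameServer {
  stateRoughlyMatches(event: ShotEvent): boolean;
  playersAffectedBy(event: ShotEvent): Player[];
}

// Client side: the OS message pump has already delivered the click.
function onMouseClick(game: ClientGame): void {
  // Decide client-side whether a hit is even probable.
  const target = game.raycastFromCrosshair();
  if (target === null) return;

  // Serialize the event and hand it to the network stack; the
  // packets then hop across routers until they reach the server.
  const event: ShotEvent = {
    shooterId: game.localPlayerId,
    targetId: target.id,
    clientTick: game.tick,
  };
  game.socket.send(JSON.stringify(event));
}

// Server side: the reassembled packet arrives here.
function onShotEvent(server: GameServer, event: ShotEvent): void {
  // Confirm the claimed hit against the server's authoritative state;
  // if the client's view has drifted too far, the shot is rejected.
  if (!server.stateRoughlyMatches(event)) return;

  // Fan the confirmed result out to everyone the event affects
  // (shooter, the one shot, and anyone watching); the remaining
  // steps in the list are just this trip in reverse.
  for (const player of server.playersAffectedBy(event)) {
    player.socket.send(JSON.stringify({ kind: "shotResult", ...event }));
  }
}
```

Every arrow in that list is a place where time is spent, which is the point: even with the wire removed, the OS, the game loop, and the server validation each add their own slice of latency.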

Now, EVEN IF the networking gap in this equation is removed, is latency still there? Yeah. There are too many components involved that take time. The event described above is resolved in less than a second with a good ping, but that’s still latency; those aren’t real-life time frames.

At what point do all the components above become real-time? Or, by the time that happens, will we even still be using the components above?

I think latency will always exist, at least in computer/console games. The only ones negatively affected by it are the fast-paced FPSs; the MMORPGs are not, as long as the connection is continuous and latency remains at a stable level. Compensating for networking latency by tweening characters’ movements, auto-performing character attack actions, and queuing up events are all great ways to smooth the experience and compensate for lag.
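
As a concrete illustration of that tweening trick, here’s a minimal TypeScript sketch of interpolating a remote character between its last two server updates; all the names here are assumptions for illustration, not from any particular engine:

```typescript
// Instead of snapping a remote character to each late server update,
// the client tweens (interpolates) between the last two known positions.
interface Vec2 { x: number; y: number; }

interface RemoteCharacter {
  prev: Vec2;        // position from the older server update
  next: Vec2;        // position from the newest server update
  prevTime: number;  // ms timestamp when `prev` arrived
  nextTime: number;  // ms timestamp when `next` arrived
}

// Called whenever a position update arrives from the server.
function onServerUpdate(c: RemoteCharacter, pos: Vec2, now: number): void {
  c.prev = c.next;
  c.prevTime = c.nextTime;
  c.next = pos;
  c.nextTime = now;
}

// Called every rendered frame; returns where to draw the character.
function renderPosition(c: RemoteCharacter, now: number): Vec2 {
  const span = c.nextTime - c.prevTime;
  if (span <= 0) return c.next;
  // Clamp t to [0, 1] so the character waits at `next` if updates stall.
  const t = Math.min(1, (now - c.prevTime) / span);
  return {
    x: c.prev.x + (c.next.x - c.prev.x) * t,
    y: c.prev.y + (c.next.y - c.prev.y) * t,
  };
}
```

The key design choice is rendering slightly in the past: by always drawing between the two most recent known positions, movement stays smooth even when updates only arrive a few times per second.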

So to me, the question is not will latency ever go away, but rather, will latency ever stop being an issue?

In playing multiplayer games over the years, I’ve seen drastic improvements in latency compensation, improving the lag experience. Game developers are getting better at defining which parameters are show-stoppers and which are acceptable boundaries, even leaving a lot of that up to players.

For example, most games nowadays have clearly written latency-cap rules, meaning if your ping rises above a certain threshold a set number of times, you are booted. Since a client’s participation also adds a level of latency, because it is one more non-instantaneous client to keep track of, you end up with a weakest-link scenario; those with the highest ping have the possibility of ruining the game for others.
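
A minimal sketch of such a latency-cap rule might look like this; the threshold and strike count are invented numbers, not from any real game:

```typescript
// Hypothetical "three strikes" ping cap.
const PING_CAP_MS = 350;   // a ping above this counts as a strike
const MAX_STRIKES = 3;     // strikes allowed before the boot

interface TrackedPlayer {
  name: string;
  strikes: number;
  kick(reason: string): void;
}

// Called each time the server measures a player's round-trip ping.
function checkPing(player: TrackedPlayer, pingMs: number): void {
  if (pingMs > PING_CAP_MS) {
    player.strikes++;
    if (player.strikes >= MAX_STRIKES) {
      // The weakest-link rule: one laggy client degrades the game for
      // everyone the server must keep in sync, so out they go.
      player.kick(`ping over ${PING_CAP_MS}ms ${MAX_STRIKES} times`);
    }
  } else {
    player.strikes = 0; // a good reading forgives earlier spikes
  }
}
```

Resetting the strike count on a good reading is one way to tolerate a single spike without booting anyone over a momentary blip.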

Different games handle this in different ways. In the early days of Starcraft, an online strategy game, almost half the games had the suffix “_Broadband Only” in their names. If your latency indicator (5 bars) turned from green to yellow or red, you were booted, even if it was the person hosting the game causing the problem. Racism based on color took a whole new twist.

Now, it’s by the numbers for most PC games. Until consoles get to that level of g33k maturity, you’ll still see the gauges and bars showcasing the very simple concept of network reliability, which is a new concept for some people. Even with broadband, things aren’t perfect.

The higher your ping, the more you are stigmatized. If it starts high but goes low, or is even remotely unstable, you are mistrusted. The moniker of “LPB”, or low ping bastard, was immortalized in early games like Quake, Unreal, and Half-Life, and it does in fact garner a lot of respect. With a low ping, you have a better chance of doing well, mainly because the game gets quicker, and thus usually more accurate, information about your game state. Did you really dodge that bullet? If your ping is lower, probably so.

It’s not really about rich or poor, though; most of it is circumstance. I have DSL, for instance, but the connection sucks and is unreliable.

These stigmas, however, do not transfer to MMORPGs as much, mainly because those games do not require extremely low latency, but rather stable connections. Dependable is more important than responsive.

Games have made great strides toward softening the latency blow: pausing the game if it gets really bad, or letting you still move and interact with the world, pausing only while waiting on the server’s response for the actions that truly require it, thus not negatively affecting your gameplay.
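
Here’s a hypothetical sketch of that “keep playing, pause only when you must” idea in TypeScript; the class, names, and 5-second timeout are all assumptions for illustration:

```typescript
// Local actions run immediately and are queued for the server; the game
// only pauses if an action that truly needs confirmation goes unanswered.
interface PendingAction {
  id: number;
  requiresConfirmation: boolean; // e.g. a trade: yes; walking around: no
  sentAt: number;                // ms timestamp when sent to the server
}

const CONFIRM_TIMEOUT_MS = 5000;

class LatencyFriendlyClient {
  private pending = new Map<number, PendingAction>();
  paused = false;

  // Perform the action locally right away, then remember we owe the
  // server a confirmation for it.
  act(action: PendingAction, applyLocally: () => void): void {
    applyLocally();
    this.pending.set(action.id, action);
  }

  // Called when the server acknowledges an action.
  onServerAck(actionId: number): void {
    this.pending.delete(actionId);
    this.paused = false;
  }

  // Called every frame: pause only if a must-confirm action has stalled.
  update(now: number): void {
    for (const action of this.pending.values()) {
      if (action.requiresConfirmation && now - action.sentAt > CONFIRM_TIMEOUT_MS) {
        this.paused = true; // wait for the server instead of desyncing
        return;
      }
    }
  }
}
```

The split between actions that require confirmation and those that don’t is what keeps the world feeling responsive while the important stuff stays in sync.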

However, some games are still built assuming low-latency connections. It’s a lot like how newer software, rather than being more optimized for the hardware, actually depends on the hardware evolving so it can run its new features that require more resources. Windows Vista (aka Longhorn) is a perfect example. Not many computers today can really run the beta well enough to get the true benefit of a hardware-accelerated OS (unless you’re a Mac g33k).

That says to me that connections, like hardware, are simply expected to improve.

Now, this attitude towards hardware has brought forth renewed interest in interpreted languages, for example. They were formerly criticized for their slow speed because they are not compiled to machine code ahead of time, but rather compiled just in time or on the fly (or to some higher-level bytecode). Now that hardware has improved, people can look past their shortcomings in speed… because those shortcomings no longer exist, or at least not enough to adversely affect anyone.

As such, will networking continue down the same path? Will connections get so fast and reliable that games take advantage of that fact? It’s following the same path as hardware, so as far as the developers are concerned, apparently so.

I still think there are too many factors involved to truly remove latency, not just the networking components, but the hardware and software.

Even if things used quantum computing or some other faster-than-light computing power, there is still one problem… people.

People communicate via computers, and communication is an inherently flawed process. People deliver a message, and this message goes through 2 filters: the sender interprets the message while delivering it, and the receiver has to interpret what the sender said. These 2 filters alter the message’s original intent, and they additionally take time.

So, even if hardware, networking, and software are instantaneous, people communicating and interpreting information are not.

…still, the thought of lag becoming history is just awesome. I’m sure it’ll resurface when I try to play Quake 27 with someone who lives on Pluto and I’m on a wireless-gravity packet accelerator (uses the Sun’s gravitational pull to slingshot packets to their destination) or even a hyperspace satellite (drops into hyperspace for faster-than-light travel to its destination orbit to deliver information more quickly). It’s all good; striving to be more connected is what the Information Age is all about, right?