Blog

  • Multiple Vibes: Vibration Studies?

    Intro

    The ways in which devices interact with us are all bound by the 5 senses we humans have to experience our world. Screens display things visually for our eyes, speakers and phones emit audio alerts for our ears, and keyboards and force feedback joysticks engage our sense of touch. Our senses of smell and taste, somewhat intertwined, are not utilized much.

    I think in the main 3, we’ve gotten pretty good at knowing how to best utilize the senses via our various technological devices. The screen shows text so we can clearly see it, we can adjust the brightness to compensate for lighting levels, and image compression technologies take advantage of the fact that there are thousands (millions?) of colors we cannot distinguish; those are removed from the image to save on file size.

    Audio

    Audio is the same way; speakers have volume controls, and sub-woofers not only help enhance realism, but have also evolved to help the deaf. I even saw a Flash site once that used audio to tell you where to click to navigate… the whole site was just a blank, black Flash movie. Certain sounds have become a part of our lifestyle, and we know what they are and what they mean all by their distinct sounds. Some forms of pitch and intonation are used to evoke emotions and/or elicit a response. Just like a crescendo builds in a song, so too can a simple, 2-syllable sound invoke a feeling that the computer is asking a question, or has completed a task.

    Touch

    Touch is getting there. I think there are a lot of neat boundaries we are pushing. Virtual reality games are getting cooler and cooler. From the adult world, those “sensation suits” will hopefully be adapted for the gaming world so that, like force feedback joysticks, when you get hit in game, you’ll “feel the punch” or “experience the rush of the explosion”… things like that.

    The whole point of this post

    I feel a vibration at my hip. I grab my Cingular Text Pager. No new message. It’s my phone instead, telling me I have a voicemail. They sit in the same vicinity, and thus it is very easy for me to get confused about which one is vibrating.

    Couple that with the proximity to my actual skin: loose pockets, or the thick leather jacket I wear in winter, make it difficult for me to actually “feel” the phone ringing; to feel that someone is calling me.

    Given some of the studies on touch I learned about in college, some people’s sensitivity to touch is based on their experiences with it growing up. This was psychology, so they were mostly talking about Freud’s theories and how comfortable people were with touch. If you didn’t experience a lot of affection as a child, then later in life you were more likely to feel uncomfortable with people touching you than if you had. This is cultural, too, because some families are just not affectionate while others are. Some touches have positive associations built up, whilst others have negative ones. It’s pretty complex, but to me, you can easily find the sources/causes.

    Therefore, I’m not really sure you could quantify a stereotype/generality about which vibrations imply what. So I just look to music. For instance, something fast and quick implies urgency. Something constant that then builds up in intensity implies someone/something wants your attention. I guess I’m not sure, if I were an engineer, how I would differentiate the vibrations between devices as well as ensure the experience is correct… enough so to be marketable and/or have a real business use. For example, ring tones do make money; thus, there is a reason to invest time in their effect on users.

    Conclusion

    …all I know is, via vibration, I’d like to know which device is telling me it has a message: my phone or my text pager. Vibrations are quiet, but they get my attention more than anything since they are so personal.

  • Layout p@wn3d

    We have this Human Factors guy working here. Name’s Shayne. He did the wireframes for some of the Pods I’m building. So, we get together to discuss my current work and to ensure I’m hitting his vision of how things are designed. He’s not technically doing design here, but he pretty much gave my team the Illustrator design to use, both for layout and look and feel, as well as the wireframes beforehand.

    We started discussing the layout of the pods, with their use of the design elements and text. Within an hour, I had been schooled on layout, use of fonts, and textual elements. I had pretty much gotten a college layout class, in less than 30 minutes, for free.

    Bad news is, all of my components’ size functions now pretty much need to be rewritten. The good news, however, is I now know how to do them 90% right. He says I won’t get all of my margin/positioning rules right the first time, nor have I got all of this figured out yet. I have 2 pages of his notes to use as reference; some on the backs of wireframes, the rest on graph paper I ganked from my manager. Good stuff. It makes doing layout in code harder, but in the long run, a lot better… looking. A rough sketch of what those size functions will look like is below.
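
    As a minimal ActionScript 2 sketch, assuming a Pod clip with a background, title, and body (the child names and the 10-pixel margin are placeholders I made up, not Shayne’s actual rules), a margin-aware size function could look like this:

        // Resize a Pod clip, keeping its children inside a uniform margin.
        // background_mc, title_txt, and body_txt are hypothetical children.
        var MARGIN:Number = 10; // placeholder value, not Shayne's real number

        function sizePod(pod:MovieClip, w:Number, h:Number):Void {
            // the background stretches to fill the whole pod
            pod.background_mc._width = w;
            pod.background_mc._height = h;
            // the title hugs the top-left margin
            pod.title_txt._x = MARGIN;
            pod.title_txt._y = MARGIN;
            pod.title_txt._width = w - MARGIN * 2;
            // the body fills whatever space is left inside the margins
            pod.body_txt._x = MARGIN;
            pod.body_txt._y = pod.title_txt._y + pod.title_txt._height + MARGIN;
            pod.body_txt._width = w - MARGIN * 2;
            pod.body_txt._height = h - pod.body_txt._y - MARGIN;
        }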

  • ATL Crushed Grape Mix

    Via my man, Jonny Bag of Donuts.

    It’s late. I’m tired, drinking multiple cans of Red Bull to stay awake. Best client I ever had has given me a list of client changes. I need to finish them now since they were due 10 years ago. I start to feel sorry for myself…

    …then I get a belated birthday present via email. No matter how bad off you think you are, there is always someone else who has it worse. Laughter is the best, yo. Thanks Jon!

    The Atlanta Grape Tragedy

    The Remix

  • Faster Compile Time / Test Movie via SWC

    Werkin’ on a project this weekend where separating the audio into a level wasn’t an option because I needed the code to be simple. It was a simple slideshow with synced (streamed) audio, and the deadline was tight. The problem, though, was that each test movie took a long time to compile. Eventually, I changed the audio compression to RAW just so the compile times were better, but a 4 meg MP3, even at RAW compression, still took about 10-20 seconds on my 800MHz P2.

    I managed to get the main interface and all its parts consolidated into one SWC. This was really nice, as I could reuse it in other movies simply for its interface; unless you set up event listeners, it didn’t do anything.
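
    Wiring into it would just be the standard v2 component listener-object pattern; here’s a rough sketch (the interface_pod instance name and the “change” event are assumptions about my setup, not the SWC’s actual API):

        // Listener object for the interface SWC; "change" is a hypothetical
        // event name the component would broadcast.
        var podListener:Object = new Object();
        podListener.change = function(evt:Object):Void {
            trace("pod changed: " + evt.target);
        };
        // interface_pod is the SWC component instance on the stage
        interface_pod.addEventListener("change", podListener);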

    Frustrated post-project about how to make things more efficient, I tried some SWC tests this morning with audio. The first thing I forgot was that SWCs are, in essence, SWFs; Flash merges them with the SWF you’re compiling. Since an MP3/audio file that is merged in with the timeline, and only the timeline, won’t export into your final movie, it gets compressed twice. So, I set its linkage ID and tried that. Even though the same 1 meg SWF went from 3 minutes to compile (MP3 at 64 kbps, Best quality) to 3 seconds, the audio kept getting recompressed. I’m not really sure why, since I set the compression on the MP3 itself in the source FLA and didn’t have the override audio compression setting checked in the Publish Settings.
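
    For reference, attaching a library sound by its linkage ID is just the standard Sound class routine (“narration” here is a placeholder linkage ID, not the real one from my FLA):

        // Attach the library MP3 by its linkage ID and play it once.
        var narration:Sound = new Sound(this);
        narration.attachSound("narration"); // linkage ID set in the Library
        narration.start(0, 1); // start at 0 seconds, play once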

    At any rate, what I did conclude is that anything I can consolidate into an SWC greatly expedites my development time, whether for programming projects or others. Besides, most non-programming projects have a plethora of library media (graphics, bitmaps, text, etc.), and it’s nice to have just one symbol to deal with. The time savings in compiling make SWCs for non-programmatic content definitely worth investing time to learn and test. Multiply 3 minutes by how many times I do a test movie in a typical development session, and then change that number to 3 seconds; at, say, 20 test movies, that’s an hour of waiting versus a single minute. You can see how much more efficient you can get if you implement this solution across the board.