Software of Today Loves the Hardware of Tomorrow

16-hour day, and I still got mad energy. I’ll probably collapse in a quivering heap Friday… at least I’ll be at the pub then, hopefully collecting on answering Flash questions bartered for beer. I’m a cheap date, so maybe I can teach the bartender how to go to the frame, or explain to the waitress the pros and cons of authortime-placed components vs. dynamically attached ones.

I’m messing with Central stuff, right, and I notice that testing here at home just sux. It’s so dang slow here… but at work, she screams; both my Flash stuff and the stuff I’m making for Central. It’s becoming eerily similar to computer games (non-console). A lot of games come out for the top 30th percentile of computer users; yes, that figure includes geeks. Basically, if you have a high-end system at the time the game is released and you can buy it, it’ll run very well; you may even be able to peg all of the game settings and still have it run very well.

For the rest of us who drive our comps into the ground, and then make fish tanks or stop-motion animations out of the remains when they finally do choke, our games have 2 lives; most are halfway playable at first, and then later play so well it’s a totally different game.

I feel Central’s doing the same thing. I figure by the time 3.0 is out, comps will be fast enough that you’ll probably never hear someone bitch about Central running slow or being a resource hog on their PC… only on their cell phone or something like that.

The concept works great for game makers because it allows them to up the realism and depth (in my opinion) of the game. Yeah, a game isn’t anything without a good story, hence Final Fantasy 3 (America’s 3) being the best game ever made (Super Nintendo), but you gotta admit Tron 2.0 blows away the original, both in graphics and in story depth. Interestingly, the original is still fun, but I have no qualms dishing out the dough for the new version because it is just that cool.

I think the same goes for Central. Like, the only thing really holding me back from making 10 million Flash applications nowadays is time, not technology nor the IDE. I notice more and more that even the animations being put out now would NOT have run years ago; case in point, try running that processor-ravager on a P2 400. Yeah, sure…
…and yet now you can; instead you focus on the site’s experience vs. the performance getting in the way.

I see Central going the same route; I ignore any performance shortcoming when I go to work and use it. It’s amazing how quick I forget too… ’cause every time I come home and open the same project, I’m like “DAMMIT!”

It also lends credence to the whole Flash ecology thing, and basing a lot of business on it; it’s only getting faster and more widespread. Pretty neat long-term planning and execution.

Longhorn’s based on that very concept. Make the shiot need the uber hardware.

11 Replies to “Software of Today Loves the Hardware of Tomorrow”

  1. Jestoro,

    I tend to think there isn’t a problem with slower hardware and Flash. I don’t think you have to code for 2morro. To me, the problem lies more in the application code than the actual hardware. You can get things to be super fast, but you have to know what you are doing. This is the problem that you are going to face in any language, not just Flash.

    Honestly, we should be working on 400–600MHz boxes and doing remote compiles to a 3GHz box. That way, when you are viewing your SWFs, you are viewing them the way your customer will view them.

    Now of course, this all depends on who your customer is. If you know all your customers are running 3GHz machines… don’t bother. However, if you serve a large distribution base, you better be checking it on slow computers, slow connections, etc. The one thing you can’t ignore is the end user.

  2. Most importantly, in regards to the best game ever made:

    EarthBound, FF7, Chrono Trigger and Zelda: A Link to the Past are all pretty high up there too.


  3. But Kenny, I would think that the power/modernity of the video card might have more to do with it. I’ve got a PIII 1000 and it works pretty well, but I have a year-old, fairly hopped-up ATI card in it. My G4 Dual 866 works pretty well too, whatever that has in it. Of course, I’m talking visual stuff. In general, though, I think that whatever code we’re using is handled fairly easily by the computer itself; the bottleneck is probably the Flash player and the video card. Then again, I may be totally wrong.


  4. That is still hardware though. Yes, hardware is going to affect how well something performs, but when it boils down to it, you really need to concentrate on the code and the app itself. People were making beautiful games and incredibly fast apps way before computers got to 400MHz.

    Why does the new player perform better than the old one? Better code underneath. Same goes for any computer program. One of the biggest benefits to any programmer is to invest in the study of algorithms and data structures.

    I just think Flash is great right now. It was great in all its previous versions. Yes, hardware will make it better; yes, better players will make it better; but code for today and yesterday, because 2morro things are going to be completely different anyway.

  5. Kenny, sure you can improve your coding to make Flash go faster. But, in general, Flash is one of the slowest-performing graphic platforms of all. Central is even slower. After you squeeze every last drop of performance out of your code, Flash will always be slower than, say, Director. In fact, you can be sloppy and almost anything runs really fast in Director. (Not that you should be sloppy.)

    I can’t stand “coding for tomorrow”. It’s a slippery-slope type of plan. Let’s see: several years ago I heard people say “if only everyone had a 486/66”. Then it was “whoa, we get to target a PII 300!” No matter where the hardware is, people will push it. Actually, some old stuff goes TOO FAST on new machines.

    Part of the trend (to always be on the edge) is because you want to offer something new or more valuable. But part of it is a sinister plan from the likes of Intel and Microsoft. Mainly, it’s just us pushing the limits.


    P.S. I want to see Jesse’s work machine where Central flies.

    P.P.S. Flash 8 is supposed to be leaps and bounds faster than 7, not the normal incremental improvement either. Do you think no one will need faster and faster hardware then?

  6. Code optimization has never found its way into my development schedule. Things have always just “worked”. Maybe I’ve tweaked some XML parsing, but that’s about it.

    Flash has always worked just fine; I think since Kenny’s been doing a lot of games, and has a C background (I think), that mentality goes hand in hand with and complements his development.

    Me, on the other hand, I want to make stuff look and feel cool; if it ravages your CPU, who cares, the experience is what matters. It’s hard to have both, so I understand both angles… even the 3rd one from Microsoft. You can’t just make shiot work… if you did, I’d be out of a job, and no one would upgrade hardware.

  7. No, Flash 8 will implement features which will allow us to do more, thus use more hardware, and push the limits again… at least I will try!

  8. I don’t understand how you don’t run up against Flash limitations. Are you just naturally efficient?

  9. Typically, when I had problems in Flash MX, I took the stance that if my stuff didn’t run well, I was designing it wrong. Rather than try to tweak an algorithm, or find out why something was taking too long to load, I just designed it so it ran acceptably. Forest-for-the-trees kind of thing. If it was too slow, I was doing something wrong; not like “it should work”… well duh, it should run, but it’s not, so what’s the pragmatic thing to do?

  10. Hmm. In every project I include a stage where I revisit the general programming approach to see if I can improve performance. Often I’ll do a test early on to make sure there aren’t any absurd performance hits. If you find performance is lacking, you can’t just take another approach; I mean, how do you know the new approach will be better?

    I’m still surprised you don’t come across this often.

  11. I usually build a project from a series of tests. Some of the classes, layout, and messaging I already know in my head; the rest I sketch out, which isn’t much. The majority of my time is spent testing code prototypes, like loading XML via RPC, and making sure I understand how the class works, and how I should best integrate it with others I may already know I want to build. I’ll then go build the things I either don’t fully know how to do, or do know, but want to make sure will work the way I want.

    Once all the little 200-line testers are operational, and it’s easy to debug them, I then start building bigger testers, which are just combinations of my coding prototypes (XML-RPC class loading XML from LiveJournal using my Login Form). Does it work? Is it easy to debug? Can the rest of my work happen in this way? Does it feel right? Later down the road, can I modify it? Extend it?

    I can usually only answer 70% of those questions. I find out 20% more once I’ve reached near completion of a project. At that point, if I have time, I’ll rewrite it from the ground up, using existing classes a little bit, but refactoring across the board. Usually, the 2nd time around she “feels good to work with”, both in production and when I have to start adding last-minute weirdness from the client.

    As far as optimization goes, I woulda caught most of that stuff up front; the exception being getting everything playing together, and then realizing there are so many moving parts that it really becomes about optimizing the messaging between them.

    Sorry dude, I hope that helps better explain how I do things and what I run into. I think the difference is, I can re-write a lot of my core architecture really quickly while re-using some of the old stuff merely to modify my approach. Go RAD development…..
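The prototype-then-combine workflow described in the reply above could be sketched roughly like this (plain JavaScript standing in for the ActionScript of the era; the function names and data are made up purely for illustration — small, independently debuggable testers that get wired together into a bigger tester):

```javascript
// Prototype #1: a toy stand-in for the "load and parse XML" tester.
// Splits a pipe-delimited feed into trimmed, non-empty entries.
function parseEntries(rawFeed) {
  return rawFeed
    .split("|")
    .map(function (s) { return s.trim(); })
    .filter(function (s) { return s.length > 0; });
}

// Prototype #2: a toy stand-in for the "login form" tester.
function buildLoginGreeting(user) {
  return "Welcome, " + user + "!";
}

// The "bigger tester": a combination of the two prototypes,
// like the XML-RPC class feeding the Login Form in the comment.
function combinedTester(rawFeed, user) {
  return {
    greeting: buildLoginGreeting(user),
    entries: parseEntries(rawFeed)
  };
}
```

Each piece can be run and debugged on its own before being combined, which is the whole point of the approach: if `combinedTester` misbehaves, you already know the two prototypes underneath it work.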

Comments are closed.