
Cross-platform game development and the next generation of consoles

By the end of 2006, all three of the next-generation game consoles should be …

By Jeremy Reimer

Introduction

The gaming industry has come a long way since its humble beginnings more than thirty years ago. From a time when people were thrilled to see a square white block and two rectangular paddles on the screen to today, where gamers explore realistic three-dimensional worlds in high resolution with surround sound, the experience of being a gamer has changed radically.

The experience of being a game developer has changed even more. In the early 1980s, it was typical for a single programmer to work on a title for a few months, doing all the coding, drawing all the graphics, creating all the music and sound effects, and even doing the majority of the testing. Contrast this to today, where game development teams can have over a hundred full-time people, including not only dozens of programmers, artists and level designers, but equally large teams for quality assurance, support, and marketing.

The next generation of consoles will only increase this trend. Game companies will have to hire more artists to generate more detailed content, add more programmers to optimize for more complex hardware, and even require larger budgets for promotion. What is this likely to mean for the industry?

This article makes the following predictions:

  • The growing cost of developing games for next-gen platforms will push publishers to require that new games ship on many platforms.
  • Increased cross-platform development will mean less money for optimizing a new game for any particular platform.
  • As a result, with the exception of in-house titles developed by the console manufacturers themselves, none of the three major platforms (Xbox 360, PS3 and Nintendo Revolution) will end up with games that look significantly different from each other, nor will any platform show any real "edge" over the others. Many games will be written to a "lowest common denominator" platform: two threads running on a single CPU core, plus the GPU.

All other market factors aside, the platform most likely to benefit from this situation is the Revolution, since it has the simplest architectural design. The PC, often thought to be a gaming platform in decline, may also benefit. Conversely, the platforms that may be hurt the most are the PlayStation 3 and the Xbox 360, as they may find it difficult to "stand out" against the competition.

These are bold statements, and I don't expect to make them without at least attempting to back them up with a more detailed argument, nor do I expect them to go unchallenged. In fact, I have reserved a section at the end of the article where I describe all the problems I could find with my theory. The fullness of my argument can best be understood by reading through to the conclusion, and I would encourage readers to do that before engaging in conversation in the discussion thread.

I should also add that I fully expect all three next-generation platforms and also the gaming PC to survive and do reasonably well. The console wars will require at least another round after the next one before they have any sort of resolution. Ultimately, platforms themselves may reach a point where they no longer matter, as most content will be available on every gaming device. Our grandchildren may look at us strangely when we recall the intense and urgent battles between Atari and Intellivision, Nintendo and Sega, and Microsoft and Sony. At least we will have the satisfaction of knowing that we lived through the period when gaming went through some of its greatest advances.


The runaway costs of game development

In 1982, Atari released a version of Pac-Man for its 2600 game console, also known as the Video Computer System (VCS). The game was written by a single programmer over a couple of months and had a total development cost of US$100,000. It was not a very good port, as the game flickered annoyingly and struggled to overcome the limitations of the VCS hardware. Nevertheless, it sold over 10 million copies at US$30 a shot, with a cost of goods sold of just about US$5.

In 2004, Microsoft released Halo 2. Over 190 people are listed in its credits, and the game took three years to complete with a total development cost of over US$40 million. The game did record sales in its first month, and has currently sold over 8 million copies at US$50 a crack.

Even after adjusting for inflation, the figures below paint a striking picture. The cost of developing high-profile games has increased exponentially over the last few years, and costs for the next generation of consoles are expected to continue this trend. Estimates have ranged from a 20% to a 100% increase in development costs for next-generation titles.

Development costs for various games since 1982
Source: Business Week; www.eurogamer.net; www.buzzcut.com; www.erasmatazz.com

While the combined income for video games has also increased dramatically since 1982, this has largely been a function of more games being available at any one time, rather than an exponential increase in the earning potential of a single hit game. Indeed, the examples of Pac-Man versus Halo 2 show that the earning potential, adjusted for inflation, of a hit game today is not significantly higher than it was in the Atari 2600 era. Of course, not every game in 1982 did as well as Pac-Man, but then again not every game released today does as well as Halo 2. Yet in the early days of video games it was trivial to find the funds to invest in the development of a new game. It is obviously significantly harder to find US$20 million to $40 million to launch a new hit title.

In the early days of gaming, developers would typically fund the creation of a new game themselves, then shop the finished product around to publishers, who would handle the mass duplication, packaging, and shipping to retailers around the world. In the early 1990s, this equation started to change. Origin Systems sold itself to publishing giant Electronic Arts in 1992, in part to obtain enough funds to complete the development of the latest installment of its epic series, Ultima VII. The fact that Origin recovered the full cost of developing the game in the first two days of preorders wasn't the problem. The issue was keeping cash flow positive while game budgets continued to spiral. Origin wasn't able to raise enough money to keep funding the next generation of game projects, but EA was, and so EA swallowed the development company whole.

The motivations of game publishing companies are typically very different from those of game development firms. Game developers are largely motivated by a love of gaming and a desire to create a new project that outshines and out-innovates anything that has come before. An interesting new game idea that happens to make a little bit of money is considered a success. Publishers, by contrast, are looking for large returns above all other considerations. The business model is to push for huge blockbuster releases and reap the windfalls of the surefire hits, then collect tax write-offs on the projects that wind up losing money. The worst-case scenario for a game publisher is a game that is only modestly successful.

The publisher mentality tends to dismiss quirky new game ideas in favor of sequels and licensed properties from movies, comics, and TV shows. This trend has been especially visible over the last few years: out of the top 100 games in terms of sales, only 13 were neither sequels nor movie/TV licenses (source: USA; TRSTS). Electronic Arts published only one new game this year, compared with 25 sequels.

Sequels and licenses are only one aspect of the ever-increasing demands made by publishers on the game development companies that they either finance or, increasingly, acquire. The other major trend over the last few years has been towards more cross-platform releases.

Cross-platform gaming

The idea of releasing titles on multiple platforms is by no means a new strategy. In the early days of personal computers, game development companies that had struck it rich with a game for, say, the Apple ][ would often contract out a single programmer to make ports for the Commodore 64 and Atari 400/800. The ports were often not of high quality, but they would easily make back their development costs.

The push for cross-platform gaming today comes from a different direction. Publishers no longer see ports as an option to pursue later if the original game is a success. Instead, the developer is mandated to release on all three console platforms, and often the PC platform as well, simultaneously. It comes down to a simple business calculation of maximizing revenue: if there are 75 million PlayStation 2 owners out there and only 20 million each of Xbox and GameCube owners, it might at first seem sensible to write only for the PS2. However, if you could target all 115 million potential customers at the same time, plus an unspecified number of potential PC game players, your potential income goes up significantly. This equation is bolstered by the fact that nonprogramming tasks such as art, music, and level design are becoming a greater proportion of the total costs of game development, and these assets can be reused across platforms.

Just as game development companies are yielding to the pressure from publishers to create sequels and licensed properties, they are also increasingly taking on the challenge of producing cross-platform titles. While successful and established studios like Naughty Dog can still choose a platform to focus on and stay exclusively on that console, smaller developers are finding that they must target multiple platforms in order to receive funding from their publishers. It is quite common today to see an advertisement for a new game from a lesser-known development team and notice the plethora of logos at the end: "PS2 Xbox GameCube PC-CDROM." Such ads were not nearly as prevalent a few years ago.

Porting games to different platforms can be expensive and time-consuming, two problems that game development companies fear above all else. As a result, more and more effort is being expended to make this sort of development easier. Since 2001, there has been a plethora of technical articles at web sites such as Gamasutra explaining how developers can architect for cross-platform design from the beginning of a project, which makes writing ports much less painful. A good example of developers following this advice successfully is Eutechnyx's Big Mutha Truckers, released in 2003 for the PS2, Xbox, GameCube, and PC:

Big Mutha Truckers was Eutechnyx' first next-generation project and there was significant pressure to keep our existing engine and simply port it to PS2. However, the existing engine was optimized for the PS1 and not geared for cross-platform development. The decision was made to rewrite our game engine.

For those companies not wishing to rewrite their game engines, the marketplace has filled that need with cross-platform middleware. Over 500 current game projects use Criterion's RenderWare 3D development platform, including Grand Theft Auto: San Andreas, Mortal Kombat: Deception, Call of Duty: Finest Hour, Sonic Heroes, and Burnout 3: Takedown. Middleware companies tout the cross-platform capabilities of their development tools as one of their strongest features.

As more and more companies find ways to improve cross-platform development, it becomes a self-reinforcing phenomenon. What was once thought optional now becomes a requirement, thanks to pressure from the publishers. However, the next generation of consoles is set to present its own unique challenges to development.

The complexity of next-generation architectures

The next generation of consoles presents developers with entirely new programming challenges, the most significant of which is the move from single-core to multicore CPU design.

Most people know about Moore's Law, and how it has been used to predict a doubling of computer speeds every eighteen months. What many people don't realize, however, is that Moore's Law was about a doubling of transistor density, not speed. Semiconductor companies had been able to ramp up clock speeds with every process shrink, but they ran into a wall at 90nm. Intel's promise of bringing the P4 core to 5GHz never came to fruition, and despite Steve Jobs' pronouncement, IBM's PowerPC 970 has yet to make it to 3GHz.

Faced with the fact that adding more transistors to the same core would no longer help it run at a higher frequency, Intel, AMD and IBM have all made the switch to multiple CPU cores on the same die.

To the operating system, each additional core looks like a second CPU. All modern operating systems can spread multiple tasks and threads over multiple CPUs automatically, so if you are running two tasks at the same time that would each use 100% of a single CPU when run individually, theoretically you should be able to run both with no slowdown. A single task with two threads that each demand 100% of a CPU's time should likewise be able to run them both, one on each core, in half the time it would take on a single-core CPU at the same clock speed.

In practice, however, things are invariably much less straightforward. Most games today are still written to use a single thread, because it is the simplest programming model and because most hardware (both PCs and consoles) contains only a single CPU core. All programmers are very familiar with writing single-threaded code, but few are experts at multithreading. Threads are typically less resource-intensive than separate processes, because they can share code and data without protection from the CPU. However, unless effort is taken to make code thread-safe, one thread can modify data while another thread is in the middle of using it, leaving that data in an inconsistent state. As Gabe Newell of Valve Software explains (video), this presents a problem when many programmers are working on a single game:

It's not like I was like lying awake at night saying I want to re-architect everything in multithreaded code to get it to work. So one of my junior programmers writing game code instead of system code can slow things down by a factor of 80, because they are doing something out in the AI or game DLL which used to be totally safe... really experienced programmers now have to say that "oh you can't tell that you're doing this wrong, it didn't used to be a problem, and there's no way you could tell it was a problem."

The other issue is that not all game tasks are parallelizable, that is, capable of being split into as many threads as needed with each one taking an equal share of CPU time. Some tasks by their very nature must wait on others before they can continue, which makes theoretical gains of 100% almost impossible to realize.

Even expert programmers can be confounded by multithreaded programming. Years ago, John Carmack modified the code for Quake III as an experiment in testing dual-processor configurations, by splitting off the rendering code into a separate thread:

It's actually a pretty good case in point there, where when we released it, certainly on my test system, you could run and get maybe a 40% speed up in some cases, running in dual processor mode, but through no changing of the code on our part, just in differences as video card drivers revved and systems changed and people moved to different OS revs, that dual processor acceleration came and went, came and went multiple times.

At one point we went to go back and try to get it to work, and we could only make it work on one system. We had no idea what was even the difference between these two systems. It worked on one and not on the other. A lot of that is operating system and driver related issues which will be better on the console, but it does still highlight the point that parallel programming, when you do it like this, is more difficult.

If a programming genius like John Carmack can be so befuddled by mysterious issues coming from multithreaded programming, what chance do mere mortals have?

Another indication that time pressures may encourage developers to follow a simpler threading model is the fact that most of the first generation of Xbox 360 games are single-threaded. While subsequent generations of games will undoubtedly be better optimized, the pressure to ship will always be there, and a single-threaded game that works perfectly but is a little slower is infinitely preferable to a multithreaded game that runs fast but occasionally slows down or locks up for no easily discernible reason.

A closer look at the next generation architectures

Microsoft's Xbox 360 contains the Xenon CPU, which has three PowerPC Processing Element (PPE) cores. Each core supports hyperthreading, meaning it can execute two threads at once, although hyperthreading is not nearly as fast as having a separate core. Otherwise, the design is fairly straightforward: the CPU connects to fast, shared RAM and to the graphics processor, or GPU.

Sony's PlayStation 3 has an even more complex design. Instead of three PPE cores, there is only one. In addition to this single core, however, Sony has added seven "Synergistic Processing Elements" (SPEs), each of which acts like a tiny CPU with 256K of local memory. Each SPE runs a distinct program, fed to it by the PPE. Data comes in from an input stream and is handed to an SPE; when the SPE has finished processing it, the result goes to an output stream. Theoretically, clever programmers could daisy-chain the SPEs so that while one is processing the first chunk of data, the next is processing the second, and so on. However, this is more complicated than simply having threads automatically allocated to separate PPE cores, and not all tasks lend themselves to such streaming division.

The Nintendo Revolution's hardware specs are still largely unknown, although Hannibal has some good insights on it. However, using educated guesswork, the system appears to be relatively simple, using a single PowerPC CPU code named "Broadway" connected to a GPU and shared system memory. The Broadway CPU may be similar to the PPE, and it may be dual-core or even single-core.

All three systems appear to achieve higher clock speeds at lower power levels than standard PowerPC processors such as the IBM 970 (found in Apple's G5 computers) by leaving out out-of-order execution (OoOE). All modern desktop CPUs use OoOE to keep the processor from having to wait when data isn't yet available for the instruction that requests it: complex circuitry fills that otherwise wasted time by executing other instructions that are ready to run.

Because of the lack of OoOE, early tests of typical game code show that the performance of the PPE at 3.2 GHz is only about half of what its clock speed would otherwise indicate. Theoretically, a smart compiler should be able to create code that never stalls due to waiting on data, but in reality compilers are rarely smart enough to do this well with complex, branching code, the sort that all games are made of.

John Carmack expressed his less-than-enthusiastic reaction to the complicated new console architectures at this year's QuakeCon:

Anything that makes the game development process more difficult is not a terribly good thing.

The decision that has to be made there is: is the performance benefit that you get out of this worth the extra development time?

There's sort of this inclination to believe that—and there's some truth to it and Sony takes this position—ok it's going to be difficult, maybe it's going to suck to do this, but the really good game developers, they're just going to suck it up and make it work.

And there is some truth to that, there will be the developers that go ahead and have a miserable time, and do get good performance out of some of these multi-core approaches and CELL is worse than others in some respects here.

But I do somewhat question whether we might have been better off this generation having an out-of-order main processor, rather than splitting it all up into these multi-processor systems.

So the real question remains: why exactly did Sony and Microsoft choose such complicated designs over simpler ones? The answer probably lies not with technology at all, but with strategic marketing.

Battling for supremacy: when to be simple and when to be complex

There are basically two positions you can be in when you launch a new games console into the market. You can be an upstart newcomer trying to steal market share from the established brands, or you can be a dominant brand trying to defend your market share from the upstarts.

An upstart needs to give third-party developers as many reasons as possible to support the new system. There will be tons of excuses that developers and publishers can use for avoiding your new system, so the best thing to do is remove as many gripes as possible and make the system as easy for developers to support as you can. Ideally, the new system should be a breeze to develop for.

Two examples of this are Sony's introduction of the original PlayStation and Microsoft's original Xbox. When both were introduced, they faced competition from much stronger and entrenched companies (in the PS1's case, Sega and Nintendo; in the case of the Xbox, it was Sony that ruled the roost). The original PlayStation had a very simple architecture with a single CPU and a 3D graphics processor, making it easy for companies to jump into the then-new world of 3D. This worked greatly to Sony's advantage over its competition, particularly the Sega Saturn. The Saturn had two CPUs, one of which was rarely used due to the complexity of programming multiple-CPU platforms (sound familiar?). Sony used this advantage to grab developers who were wavering between the platforms, and Sega lost a ton of ground. The PlayStation became the dominant console of its generation.

Sega's fumbling and Sony's marketing expertise meant that the Dreamcast, which as an underdog went back to the simple architectural model and was again easy to develop for, failed to restore the balance of power. It sold moderately well, but all the buzz was about Sony's upcoming PlayStation 2. When publishers like EA declared that they would not support the Dreamcast, its fate was sealed (despite the fact that Sega's 2K sports games were actually superior to EA's offerings; remember, this is marketing we are talking about, not technology). Sony was now in the driver's seat, and its strategic goals changed accordingly. It was no longer a matter of attracting new developers to the PlayStation 2; Sony wanted to hang on to its dominant position and prevent people from moving away from its platform.

The PlayStation 2's architecture, with its small amounts of RAM and VRAM, fast I/O, and four different coprocessors, was much more complicated than its predecessor's. Many companies found it very difficult to program for initially, but with the collapse of Sega there was no real alternative. Game companies had no choice but to grit their teeth and figure out how to optimize for the PS2. The goal, when you have a dominant market position, is to make it very difficult for people to move away from your platform, and once code was optimized for the PS2, it was very difficult to port to another platform. This gave Microsoft and Nintendo the opportunity to come along with their simpler platforms and attempt to do to Sony what Sony had done to Sega, but the PS2 had already established a dominant market position, and the new consoles found it difficult to make much of a dent. Nevertheless, the Xbox and GameCube did well enough to warrant successors.

Entering the next round of the console wars, Sony believes it is starting from a still-dominant position, and so it has increased the complexity of the PlayStation 3 architecture in an attempt to lock in the next generation of developers. Microsoft, believing that it has seriously damaged Sony's position and will continue to gain share by launching the Xbox 360 ahead of the PS3, has gone to a more complicated architecture as well. Because of its knowledge of software development, Microsoft believes it can "have its cake and eat it too" by making the 360 development kits as easy to use as possible. Many developers, including John Carmack, have praised the 360 dev kits as a step up from what they are used to getting from console companies. Only Nintendo, still a perennial underdog, seems to be promoting a simpler design for its Revolution console. With neither Sony's market advantage nor Microsoft's software advantage, Nintendo is attempting to combine a simple development platform with unique types of innovation (such as the motion-sensitive Revolution "wand" controller) in order to maintain its position in the three-horse race.

What none of the three console companies have really foreseen, however, is the fact that rising development costs are causing publishers to force more and more cross-platform releases from the third-party development companies. All the moves made to try and distinguish consoles from each other by building complicated new architectures may, in the end, be pointless.

Problems with the theory, and conclusions

While it is exciting to make predictions about the future, one has to admit that a messy reality frequently gets in the way of neat and tidy prophecy. There are several factors which may complicate the move towards cross-platform supremacy.

The most obvious one is that all three console manufacturers maintain their own stables of developers: Microsoft with Rare and Bungie, Sony with 989 Sports and Sony Interactive, and Nintendo with their internal teams. These development houses will, by fiat, only be developing for one platform and will have plenty of opportunities to optimize for a single architecture. While a console cannot survive on first-party titles alone, optimizations in these titles may force other developers to follow suit or risk being seen as falling behind technologically.

Development tools may improve to the point where they can abstract away complexities in architecture. While no "magic bullet" solution has yet been found for parallelization, there may be new profiling tools released that help developers fix threading problems, making it possible for the next generation of games to support multiple cores with significantly less hassle.

Finally, some platforms may find a survivable niche and avoid the hassles of direct competition entirely. The PC platform has always done well with real-time and turn-based strategy games that do poorly on consoles, as well as massively multiplayer online games and realistic flight simulators. Nintendo's Revolution may usher in a completely different set of games that rely on the innovative controller and are not easily ported to other architectures.

There is also the issue that was raised by Naughty Dog cofounder Jason Rubin in a talk he gave in 2003. He noted that in moving from the PS1 platform to the PS2, the polygon count increased by a factor of 50, the refresh rate doubled, and screen resolution increased by a factor of 2.5. Discounting new special graphical effects that were also added, this is a 250x increase in graphic detail. However, consumers did not perceive a 250-fold increase in graphics quality! From a player's point of view, the quality jump was noticeable but not earth-shattering. The leap from this generation of consoles to the next may well involve a similar increase in graphical complexity yet only a modest jump in perceived quality. While dedicated gamers will certainly notice the difference, casual gamers may have a harder time realizing exactly why the next generation is so much better than the existing one.

Rubin pointed out that this trend may actually benefit gaming in the long run. If graphical differences are no longer so profound, gamers may gravitate towards games that have new and unique game play, or have superior storytelling. Instead of bickering over technical specs like antialiasing or frame rate, future gamers may spend most of their time discussing which games moved them the most emotionally.

The issue of spiraling game development costs and the increasing consolidation and power of the giant publishers still needs to be addressed, however. Many game developers have expressed concern that innovation is being squeezed out in the push for bigger and slicker sequels and licensed properties. The solution to this problem has not yet been found, but something tells me that if anyone has the creativity and brainpower to solve it, it is the game developers themselves.
