Thursday, 17 November 2011

PC Skyrim is not a "lazy console port".

So Skyrim came out. The blurb on Steam calls it "[t]he next chapter in the highly anticipated Elder Scrolls saga". I don't know about you, but I don't find myself lying awake at night wishing that Oblivion and Morrowind would hurry up and come out. "The highly anticipated next chapter in the Elder Scrolls saga," marketers.


Marketing grammar aside, comment threads and forums have been alive with praise and condemnation in what seems to be roughly equal measure. One note that caught my eye, though, was the recurring accusation of "lazy console port". It's ironic that the phrase "lazy console port" is itself the product of lazy thinking. This is the kind of knee-jerk condemnation that the PC gaming crowd loves to go in for, but it's not helpful. So in what ways is Skyrim a lazy console port, and in what ways is it not? A lot of what follows is stuff I've said on forums and in comment threads myself, so if you've been following me around like some creepy stalker, 1) you've probably already read most of this, and 2) don't make me call the police.

The interface - navigation

Now, I'm not going to defend the interface. It's terrible. However, while I have not played the game on an Xbox 360, I have played it on the PC with an Xbox 360 controller, and let me tell you, that's just as bad as trying to navigate it with mouse and keyboard. If they've designed this for a console, they've done a poor job of it.


In fact, the UI seems like it would be a lot more at home on an iPad. And this is the problem with the interface: they've designed it for aesthetics before function. It looks lovely, but it's a pain to use. In designing its aesthetics, they've quite clearly taken notes from tablet interfaces (which are very cool) and not thought properly about how that would affect its usability on a different device. It's bad interface design, but it has nothing to do with consoles.

The interface - dual wielding

The only interface issue that I think might be a console problem is the dual wielding thing. On a console, see, the item in your left hand is bound to the left trigger, while the item in your right hand is bound to the right trigger. On a PC, this is reversed; the item in your right hand is bound to the left mouse button, and the item in your left hand is bound to the right mouse button. That sounds confusing, but it's gaming convention to map primary attacks to LMB and secondary attacks to RMB. Skyrim just assumes that your character is right-handed, so the dominant hand is mapped to the primary mouse button. In console conventions, the right trigger is often thought of as the 'primary' one.

That's all ok apart from how it's represented in the interface; when you equip items into your hands, they're marked with a little 'L' or 'R'. Does that refer to which hand is holding the item, or which button you should press? It actually refers to which hand is holding the item, but that's not clear. I hesitate to call this a console design issue, though. I think it's more just that console players are lucky their conventions don't raise any ambiguity here, rather than that the interface was designed with that in mind. Denoting into which hand your weapon is going is an established RPG convention, and it seems likely that the design of Skyrim is just following that rather than contemplating too deeply how that applies to the control system.

Mouse sensitivity and acceleration

Ok, let's start with mouse acceleration. It is the very devil, and I'll give you that it's a problem with console ports. However, given that it takes more effort to implement than not, I don't think it's a problem with lazy console ports. I don't understand why games persist in using it. Everyone hates it, and it's easier just not to implement it. (Update: Apparently I'm an idiot. Mouse acceleration doesn't, in fact, take more effort to implement than to not implement. If you use Windows' standard mouse input, you get Windows' pointer acceleration for free; to get around that you have to read input from the mouse hardware yourself, which is more work. But the corollary is that mouse acceleration isn't there "to make it feel more like a controller" but "to make it feel more like the cursor in Windows", which still indicates that it's not a symptom of consolitis.)
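For the curious, here's a minimal sketch of the difference on Windows (this is just my illustration of the general approach, nothing to do with Skyrim's actual code): ordinary WM_MOUSEMOVE messages give you cursor coordinates that have already been through Windows' pointer acceleration, whereas registering for Raw Input gives you the unaccelerated deltas straight from the device, at the cost of a little extra work.

    /* Sketch: the easy route vs. the extra-work route for mouselook on Windows.
       Window creation boilerplate omitted. */
    #include <windows.h>

    static void register_raw_mouse(HWND hwnd)
    {
        RAWINPUTDEVICE rid;
        rid.usUsagePage = 0x01;   /* HID generic desktop page */
        rid.usUsage     = 0x02;   /* mouse */
        rid.dwFlags     = 0;
        rid.hwndTarget  = hwnd;
        RegisterRawInputDevices(&rid, 1, sizeof(rid));
    }

    static LRESULT CALLBACK wnd_proc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_MOUSEMOVE:
            /* The easy route: cursor coordinates, already accelerated by Windows. */
            break;
        case WM_INPUT: {
            /* The extra-work route: raw, unaccelerated deltas from the device. */
            RAWINPUT raw;
            UINT size = sizeof(raw);
            if (GetRawInputData((HRAWINPUT)lp, RID_INPUT, &raw, &size,
                                sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
                raw.header.dwType == RIM_TYPEMOUSE) {
                LONG dx = raw.data.mouse.lLastX;  /* feed these to mouselook */
                LONG dy = raw.data.mouse.lLastY;
                (void)dx; (void)dy;
            }
            break;
        }
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }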

Then there's the mouse sensitivity. For some reason, the mouse sensitivity is set incredibly low by default. That must be because it's designed for gamepads, right? Well, no. Although Skyrim appears to give you only one sensitivity option, it actually has two: one for the mouse and one for the gamepad. It only shows you the option for the device you're currently using, but mouse sensitivity and pad sensitivity are stored by the game independently. If you switch devices, the sensitivity option will change to whatever you set for that device. This attentiveness to the possibility of switching between input devices is a purely PC thing; it can hardly be called an artefact of a console port. Why the mouse sensitivity is set so low by default is beyond me, but it's not a console thing.
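To illustrate the behaviour I'm describing (a hypothetical sketch with invented names, not Bethesda's actual code), the settings boil down to something like this:

    /* Hypothetical sketch of per-device look sensitivity, as described above. */
    typedef enum { DEVICE_MOUSE, DEVICE_GAMEPAD } input_device;

    typedef struct {
        float mouse_sensitivity;    /* saved independently of... */
        float gamepad_sensitivity;  /* ...this one */
    } look_settings;

    /* The options menu shows and edits only the value for the active device,
       but both values persist. */
    static float *active_sensitivity(look_settings *s, input_device active)
    {
        return (active == DEVICE_MOUSE) ? &s->mouse_sensitivity
                                        : &s->gamepad_sensitivity;
    }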


My one gripe is that the mouselook sensitivity also affects the mouse sensitivity in the lockpicking minigame, so having a reasonable sensitivity for looking around makes lockpicking very fiddly. But that has nothing to do with consoles. I will say that Skyrim's lockpicking with a mouse is seventeen times infinitely better than Oblivion's lockpicking with a mouse, but it's basically just the same as Fallout 3's lockpicking.

The 2GB memory cap

This is something that's come up less commonly, but I have seen it. Skyrim, you see, will only ever make use of 2GB of memory, even if you have more. "This must be because it's a lazy console port," comes the cry, conveniently ignoring that the Xbox 360 and PS3 both have 512MB of memory (including video memory) and nothing like 2GB.

No, it's not because it's a lazy console port; it's because it supports Windows XP. 32-bit Windows XP can technically address 4GB of memory (including video memory), but it limits each individual program to 2GB of address space. That limit can be raised to 3GB with a switch buried in the bowels of the system, which involves editing boot.ini in a text editor and so is not for the casual user; it can also cause instabilities, which is why it isn't enabled by default.
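For reference, the edit in question is adding the /3GB switch to the operating system entry in boot.ini. A typical entry looks something like this (the exact path varies from machine to machine):

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB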

There are still people out there who refuse to upgrade to Windows 7, insisting that Windows XP is "good enough". One of the advantages of PC gaming is that it can be; PC games can run on a wide range of hardware and software setups. But sometimes, ensuring compatibility on a broad range of systems entails compromises. It's ironic, though perhaps not unexpected, that a decision that was made to service one of the advantages of PC gaming is blamed on consoles. Although Bethesda could conceivably have made the 2GB limit XP-only, the point is that it's not a lazy console port; it's lazy XP compatibility.

Update: To clarify, without the large-address-aware flag set, 32-bit processes are limited to 2GB on all versions of Windows, not just XP. The reason it's an XP thing is that 32-bit XP won't give even a large-address-aware process more than 2GB unless you edit boot.ini as described above. (Technically speaking, even 64-bit processes are limited to 2GB without LAA, but the flag is on by default for 64-bit builds; you would have to turn it off manually when compiling, and, although you can, I don't know why you would.) Incidentally, the Xbox 360 and the PS3 are both built around 64-bit PowerPC processors, so I'm not sure how making it a 32-bit Windows process could possibly be a concession to codevelopment for those machines. (And, according to the latest Steam hardware survey as of March 2013, about 25% of Steam's install base is still running a 32-bit version of Windows.)
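If you're curious whether a given build carries the flag, a process can check its own executable's PE header for it. Here's a minimal sketch of my own (nothing to do with Bethesda's code); the flag itself is set or cleared at link time, for example with MSVC's /LARGEADDRESSAWARE linker option.

    /* Sketch: check whether the running executable was linked large-address-aware
       by inspecting its PE header in memory. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* GetModuleHandle(NULL) returns the base address of the .exe image,
           which begins with the DOS header. */
        const BYTE *base = (const BYTE *)GetModuleHandle(NULL);
        const IMAGE_DOS_HEADER *dos = (const IMAGE_DOS_HEADER *)base;
        const IMAGE_NT_HEADERS *nt  = (const IMAGE_NT_HEADERS *)(base + dos->e_lfanew);

        if (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
            printf("Large-address-aware: not capped at 2GB of address space.\n");
        else
            printf("Not large-address-aware: the 2GB cap applies.\n");
        return 0;
    }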

Default FOV

Ok, you can have that one.

So there. Skyrim's not without issues, but very few of them have anything to do with its console development, and those that do are relatively easily fixed. I haven't addressed the complaint that "it's dumbed down" because (1) I've talked before about how idiotic I think that phrase is, and the people who blame it on console development are doubly idiotic; (2) I surprisingly haven't heard that complaint much, except from the people you just knew were going to call it dumbed down anyway; (3) that's a subject for a future post; and (4) I was too busy checking my surroundings for creepy stalkers. Go away.

Monday, 24 October 2011

What's Up With Australia's Game Prices?

I don't want to tread on anyone's righteous indignation, but I think it's time we got something straight. I've heard a great many complaints from Australians about the kind of situation depicted below.

 
Spot the difference: Rage on Steam's Australian store (above) and on the US store (below).


What the? Aussies have to pay $30 more? How is that fair?

Well, it is fair. Let me put on my economist hat and tell you why.

The first thing you need to understand is that the purchasing power of a currency is not the same as its market exchange rate. Ok? When you go to a site like xe.com, all you are learning is how much of one currency you can buy with another currency, not the comparative value of those currencies. The value of a currency is determined by what you can buy with it. We call this "purchasing power". A currency's purchasing power can often be quite different from its market rate. If a country's economy is strong, as Australia's is right now, then its currency's purchasing power will be a lot lower than the market exchange rate suggests; if a country's economy is weak, then its currency's purchasing power might be higher than its exchange rate suggests. Exchange rates can fluctuate wildly, but purchasing power tends to remain pretty consistent (inflation notwithstanding; assuming two countries have roughly the same rate of inflation, their currencies' comparative purchasing power isn't going to change much). If only there were, I don't know, some cool way of comparing purchasing power directly without those pesky fluctuations of currency markets getting in the way.

Well, as it turns out, there is, and it's called PPP-adjustment. PPP (or purchasing power parity) works on the basis that things are going to cost pretty much the same wherever you go. As a simplistic example (cribbed shamelessly from an IMF page on the matter), if a Big Mac costs £2 in London, and $4 in New York, then the PPP-adjusted exchange rate would be £1 = $2. In other words, one pound is worth two dollars, no matter what the current exchange rate might be, because you can, on average, buy the same things in Britain for £1 as you can in America for $2. Like I say, this is a simplified example: the people responsible for the International Comparison Program survey, from which PPP-adjustment is derived, look at a lot more than just Big Macs - they look at a whole host of goods and services across the globe, comparing their prices (and it turns out £1 is actually worth $1.61).

To avoid confusion, we introduce a hypothetical currency called the "international dollar". The international dollar is like a US dollar, but it's entirely virtual. It's calculated from the findings of PPP-adjustment. Because it can't be traded, it's not subject to the fickle whims of the currency markets, and as such it is useful as a stable benchmark of the purchasing power of a given currency. That is to say, the hypothetical exchange rate from a given currency to an international dollar is the amount of that currency an inhabitant of that country would have to spend on average in order to get goods that, in the US, would be worth $1. The most recent international dollar rates are from 2005, but that's ok, because, as I said, comparative purchasing power doesn't change very much.

The most recent international dollar rates are available from the World Health Organization here (the rates themselves are in an Excel spreadsheet). So we'll look at that and find out that one international dollar is worth 1.39 Australian dollars. There's a big difference between that and the market rate, which is currently 1 USD to 0.96 AUD, but I said that might happen, since Australia's economy is relatively strong right now (Australia being one of the countries that was least badly hit by the global recession).

Let's do some calculations. Australians have to pay $89.99 USD for Rage. At current market rates, that's $86.06 AUD. Adjusting for purchasing power, that's $61.91 international dollars. Remember how I said that an international dollar is how much of a given currency someone would have to spend to get $1 worth of goods in the US? So in real, purchasing power terms, Australians are paying the equivalent of a whopping $1.92 more than Americans. So unfair!
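If you want to check my working, the whole calculation fits in a few lines. A quick sketch using the figures quoted above (the market rate is rounded, so the output differs slightly from my numbers):

    /* The Rage calculation from above, using the post's (rounded) figures. */
    #include <stdio.h>

    int main(void)
    {
        const double price_usd   = 89.99; /* Australian Steam price, charged in USD */
        const double aud_per_usd = 0.96;  /* market rate quoted above               */
        const double aud_per_int = 1.39;  /* PPP conversion factor (WHO, 2005)      */
        const double us_price    = 59.99; /* what the US store charges              */

        double price_aud = price_usd * aud_per_usd; /* what Australians pay, in AUD */
        double price_int = price_aud / aud_per_int; /* ...in international dollars  */

        printf("Market-rate price:  %.2f AUD\n", price_aud);
        printf("PPP-adjusted price: %.2f international dollars\n", price_int);
        printf("Premium over the US price: %.2f international dollars\n",
               price_int - us_price);
        return 0;
    }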

This, like I say, is based on the assumption that goods and services are going to cost pretty much the same wherever you go. So, either literally everything in Australia is 50% too expensive, or games specifically are priced just right (compared with the rest of the world, at least). Given that the average salary in Australia is about twice that (at market rates) of the average salary in America, and remembering that salaries are decided based on purchasing power, and not market rates, I'd go with the latter.

I don't think that the Australians who are complaining actually realise this. It's not a particularly well-known thing. Australians want to pay the same price as everyone else, and that's understandable, because that seems fair. But it relies on market rates being indicative of purchasing power, which they're not. If Australians actually paid $60 USD for Rage, then by the time you converted to AUD and adjusted for purchasing power, they would be paying the equivalent of only about 40 international dollars.

My point here is that, despite what they think, Australians aren't complaining about how expensive their games are; they're actually (however unwittingly) complaining about how they can't have these games effectively cheaper than everyone else. Which, you know, wouldn't be fair.

Sunday, 2 October 2011

Virtual Tourism: I Lost Myself in Tokyo and San Francisco

I spent the other evening playing through Go! Go! Nippon!: My First Trip to Japan, Overdrive's first visual novel targeted at overseas audiences. The somewhat by-numbers romantic plot aside, Go! Go! Nippon! contains elements of travelogue. Players can choose three of six possible areas of Tokyo to visit, followed by a visit to Kyoto and then, depending on how the romance plot plays out (which depends on which areas you chose to visit), either Kamakura or Yokohama. It veers towards virtual tourism.

Meiji Shrine, Shibuya

Sadly, while the locations are beautifully drawn (and often link to photographs of those locations on Google Maps), and the textual information imparted by the two girls is interesting (at least, I learned some things I never knew), the game favours iconic, familiar landmarks (and even familiar views of those landmarks). In a way, it's unsurprising that a title like this, targeted at overseas audiences, would show people the things most people want to see, but in the static frames of the visual novel format, the landmarks become decontextualised: more a highlight slide-show of a visit to Japan than a virtualisation of an actual visit. It left me feeling strangely unsatisfied. One might suspect that that's the point: after playing, I've gone from "I'd like to visit Tokyo some day" to "I'd like, a little more, to visit Tokyo some day".

But I'm really interested in virtual tourism proper. It's not a substitute for real travel, but it's inexpensive and convenient. More than that, though, it's risk-free, and you're not limited to travelling to places in the real world. Let's say you want to experience life in the Old West, or in Pompeii before Vesuvius blew its top, or Renaissance Europe, or Edo-period Japan. Those places don't exist any more, so the only way to experience them is virtually. Further, you could visit worlds that can't exist -- a common setting for games, but games are... different. Even further, you don't have to be yourself when you visit these locations. You don't even necessarily have to be human.

Video games have given us an indirect taste of what virtual tourism might be like. Ernest Adams said that he was very excited about exploring Far Cry's Pacific island environment until he realised that he couldn't do it without someone shooting at him every 30 seconds. The Grand Theft Auto series allows us to visit 'gamified' (though it's not really anything to do with Gamification) versions of New York, Miami, L.A., San Francisco and Las Vegas. True Crime: Streets of L.A. and L.A. Noire both allow us to visit somewhat more authentic recreations of L.A. than does GTA: San Andreas (and, as a bonus, L.A. Noire lets us visit a recreation of an L.A. that no longer exists, albeit with the disclaimer that it's not entirely authentic), but they somehow feel less believable.

Let me tell you about an early gaming experience of mine. When I was visiting my dad at a young age, he let me loose on his PC and loaded up for me a game called Vette! This game involved racing Chevrolet Corvettes around a cubist San Francisco (the game was released in 1989, and the 3D technology of the time didn't permit much in the way of detail). As a child, I was always interested in subverting the object of the game and finding other play-spaces within the game's systems, so it wasn't long before I realised I could ignore the actual race and just go for a leisurely drive around a recreation of San Francisco. While my brother was probably more interested in the proto-Grand-Theft-Auto elements (you were, shockingly, able to run over nuns!), this was probably my first taste of virtual tourism: complete freedom to drive around the city and just see what was there (interrupted only if I happened to crash into a certain statue, at which point the developers posted a textual memorial to the Tian An Men Massacre and froze the game).
Ok, your virtual tourism could also be interrupted if you got stopped by the police for your reckless driving. Those poor nuns!
The thing is, video games give us a seductive glimpse of what virtual tourism might be like, but they're not actually that good at virtual tourism. Game environments are often the spatial equivalent of a movie montage: enough to convey the sense and salient details, but necessarily contracted and warped. Partly this is because of technical considerations, and partly it's for the sake of the gameplay. A street that takes five minutes to walk down in real life might take thirty seconds in a game, because spending five minutes walking down a street is boring from a gameplay perspective. Virtual tourism, unlike games, has a covenant of simulative fidelity: it needs to be accurate. Technologically, there's a trade-off between accuracy, detail, and scale. This is why World of Warcraft's Azeroth is, mile for mile, about the size of a small city; not so much a 'world' at all. Morrowind and Oblivion have earned a reputation as games the strength of which lies in being "a place to be" rather than "a game to play", but their environments are still designed in a very game-like way. Virtual tourism using existing game technology would probably be necessarily limited to very small areas, or else be rather unpretty, violating the expectations we -- thanks to games -- now have of virtual environments.

Other, non-game applications have moved towards virtual tourism as well. Second Life contains impressive recreations of real-life locations. But Second Life's problem is community-generated content. Much of it is ugly due to people not realising that just because you can contribute, doesn't mean you should -- and, though I'm no prude, the permissive nature of the user-generated content creates a pervasive air of seedy sexuality: you can almost smell the jism, which is naturally rather off-putting. Google StreetView is another application that suggests virtual tourism. In The Book of Japans, Momus (whose song Folk Me Amadeus I reference in the title of this post, because I'm a bit of a fan) gives instructions for recreating the journey taken by two of the novel's characters in Google StreetView. The problem with Google StreetView, I would suggest, is that tourism is not purely exploratory; you can see things, but it suffers from a lack of agency, and, like Go! Go! Nippon!, from the static frames in which you explore (albeit with an awful lot more of them, although using only the visual medium). Further, both Second Life and Google StreetView have navigation problems that interfere with the sense of presence that a good game can provide: surely a prerequisite for satisfying virtual tourism!

It would be nice to think we could just load up a program and visit some exotic locale, but the more I think about it, the more difficulties I see with that concept. Video games may make an attempt at distilling and approximating what we want from virtual tourism, but that relies on "what we want" from virtual tourism being an objective truth, rather than the subjective thing it is. For now, we have to close off some of the possibilities.

Tuesday, 20 September 2011

Rule Zero, Star Wars, Ludic Purists and the Perception of Fairness

This began as a meditation on "just what is an RPG anyway?" to respond to purists who wouldn't consider Mass Effect 2 an RPG, but as I was constructing my Civilization V argument for last week's rant (it's a strategy game with the emphasis on 'strategy', not a strategy game with the emphasis on 'game', which, I argue, doesn't mean it's "dumbed down") and as I was reading Tom Bissell's comments on "the Gamification of games," specifically Dead Island, I began to realise that my thoughts on RPGs resonated with games more generally.

Bissell talks about dice-rolling in Dungeons & Dragons, and perhaps that's a good place to start. There is an argument that has long raged in pen & paper role-playing game circles which, for historical reasons, is referred to by the shorthand "TSR vs. White Wolf". Make no mistake: this is a nerd argument, with passionate defenders on both sides. What it boils down to is what is known as Rule Zero, a concept to which my young and impressionable mind was introduced through West End Games' Star Wars role-playing game. Rule Zero holds that, if the rules of the game are deleterious to the narrative quality of the role-playing experience, they should be ignored, or at least taken as a guideline rather than as, well, as a rule. In short, "don't let the rules get in the way of a good story." People on the White Wolf side of the debate enthusiastically adopt Rule Zero (it's no accident that in White Wolf's World of Darkness games, the game master is known as a 'Storyteller'), while those on TSR's side of the fence either insist that the rules are more important than the story, or that if the rules get in the way of the story, then your story is poorly designed and "lol yur doin it wrong".

It's important that I came to understand Rule Zero through the Star Wars role-playing game, because it perhaps explains why I don't get on well with BioWare's Star Wars: Knights of the Old Republic. KotOR is a fine, even great, RPG (if, perhaps, a little over-reliant on knowledge of the D&D rules system on which it's based), but it's a terrible Star Wars game because it emphasises the rules over the experiential qualities that are such an integral part of why Star Wars is so great.

Lightsabers and a Tatooine setting sometimes aren't really enough.
In short, in pen-and-paper RPGs, proponents of Rule Zero elevate the role of the game master in shaping the experience, while opponents elevate the rules themselves.

At first it seems counterintuitive that this concept could apply to video games. Video games, by definition, adhere to their rules. A video game cannot "bend the rules". It wouldn't know how. More importantly, it wouldn't know when to bend the rules and when not to. There are no varying degrees of concreteness to the game rules as there can be in a tabletop game; the rules are absolutely, 100% set in stone.

I would say, however, that Rule Zero is necessary not because there is something about rules per se that is inherently prohibitive to a good narrative experience ('Rule Zero' games still have rules, after all), but because the rulesets of tabletop RPGs necessarily have to be transparent and tractable to human players. A video game may have far more complex rules than a tabletop RPG ever could, but in order to prevent a situation of complete cognitive befuzzlement (I think that's the technical term...), that extra layer of complexity has to occur, as Bissell says, "under the hood". These extra rules should not be transparent to the player.

The upside is that this opens up the possibility of creating rules that, in a pen and paper game, would be immersion-breaking. Why not have rules that ensure the story is good (apart from the obvious challenge of knowing what those rules are)? If the player never sees them, such "out of character" considerations can be safely implemented. At least, that's the idea behind my PhD.

The downside (if one can call it that) is that this represents a point of divergence between video games and traditional games. In traditional games, it is necessary for a player to know and understand the rules of the game. It's easy to dismiss that as just a practical necessity for playing the game, but it's more than that. It's necessary in order for the player to trust the game, and perceive it as 'fair'. This is why transparent rules are necessary in a pen and paper role-playing game: to circumvent the childhood Cowboys and Indians problem of one child claiming to have shot another child, and the other child denying it.

For that matter, this is precisely why the cliff racers in Morrowind were so maligned (aside from their overnumerousness); the player said, "I shot you!" and the cliff racer, apparently played by a six year old, petulantly responded, "No you didn't!" (which is still dialogue more gripping than Oblivion's "Good morning. That's a nice tnettenba." exchanges, but I digress). Bethesda's insistence on performing the combat mechanics "under the hood" rather than making the rules transparent to the player did make the game seem, at times, unfair.

Childish.
This is the point that the "gameplay is king" crowd are trying to get across, and it's not a point to which I'm unsympathetic. These ludic purists (let's call them what they are) believe that games should make their rules transparent in order to offer this perception of fairness. They believe that play should be defined through transparent interaction with the game's rules. They believe that the purpose of a game is to test the player's ability to manipulate the game's rules. This view is deeply coloured by traditional games and more than a little reductive, but despite that it's a perfectly fine preference to have.

What I am less sympathetic to is attempts to paint this personal preference as an immutable Law of Gaming. That labours under the misapprehension that 'game' is a suitable term for the electronic experiences in which we routinely partake. It's not. It's a misnomer, but one we're stuck with. It's an argument of semantics and terminology: some video games are actually games in this sense, but only some.

This has always been the case, though. Traditional games tend to have only ludic rules, while video games tend to have some combination of ludic rules and simulative rules. This distinction is very important, but not always clear. Jenga doesn't need a rule for gravity -- nature takes care of that -- but a video game with any degree of physics certainly does, as does a tabletop role-playing game where the action is not physically happening. The difference is that in the RPG, the gravity rule is ludic (at least insofar as it can't be determined by common sense, such as precisely how much damage a character takes from falling), while in the video game, the gravity rule is more likely simulative.

To reiterate my argument from last week, Civilization IV is a game that tests a player's ability to manipulate the rules of the game, while Civilization V is a game that tests a player's strategic thinking in a more analogue way by making the rules less transparent. Ludic purists will inevitably see Civ IV as the worthier game, and that is not an invalid opinion. It is, however, wholly informed by personal preference, wholly subjective. Civilization V also stands, as I implied last week, as a sterling counterexample to the ludic purist's oft-advanced claim that without total transparency of the rules, there can be no strategy, and that therefore games that hide rules from the player are "dumbed down".

It's somewhat trite, at this point, to suggest that some people are more interested in the somewhat needlessly nebulous concept of 'gameplay', while others play games in search of emotional experience, but the argument between transparent rules and "under the hood" rules does seem to indicate that there is a trade-off between the "perception of fairness" of a game and the experiential complexity that game is capable of providing. I've deliberately avoided making reference to Caillois' ludus-paidia spectrum in this post because I don't think it's quite the same thing (although it's close), but I do see these things as existing on a spectrum. At one end of the spectrum is the Platonic ideal of games, with perfectly transparent rules, which exist only to test the player's ability to manipulate those rules; at the other end of the spectrum, the things we are reluctant to call 'games' at all -- interactive narratives and the like. Video games populate all points in between, not some bizarre monoculture in which one approach is right and another wrong. And that's totally fine.

Sunday, 11 September 2011

On Complaints.

The RPS Hivemind's very own King Jim wrote this week about how "Actually, It's Okay To Complain", in response to Ben Kuchera's statement on Ars Technica that "in gaming, everything is amazing and no-one is happy".

Jim's statement is very well-reasoned, but it doesn't tally with the kind of complaints I see everywhere. The core of his argument, it seems, is that it's important to say how things could be better, because that serves as a motivation for developers to make things better. That would be fine, if people were complaining about how things could be better. Instead, they're complaining about how things are getting worse, which appears to me to be completely disconnected from reality.

Those who know me might know that I have a special place in my hate for the abuse of the term "dumbing down". The other day, I heard someone call Civilization V "dumbed down". Civ V is different to previous titles in the series, in that it makes you engage more directly with actual strategy than with trying to use the rules of the game to your advantage - it's a strategy game with the emphasis on 'strategy', rather than (like previous Civs) a strategy game with the emphasis on 'game'. Is that really dumbed down? And this is a pattern that exists with a lot of "dumbing down" complaints. There is a certain irony to the school of, "I don't get it, therefore it's dumbed down."

This is related to the gaming community's deep-seated mistrust of "streamlining". To understand streamlining, we have to look at evolution vs. intelligent design. No, really, we do.

What a lot of opponents of evolutionary theory don't understand is that evolution works with what it has. To them, evolution doesn't make sense because they see it as a process of refinement, constantly improving and leading towards some pinnacle. Since evolution doesn't do that, they say evolution must be wrong. But no-one ever claimed evolution does that. Evolution selects those random mutations that confer a benefit (or at least don't confer a detriment) to environmental fitness. But those mutations have to be applied to genetic templates that already exist. Evolution can't backtrack: if it could, we might see much more efficient creatures on Earth today than we actually do.

What this has to do with games is that game mechanics have evolved. Despite the fact that games are provably and indisputably intelligently designed (well, indisputably designed, anyway), a lot of the elements that make up games today are the 'genetic' legacy of those games' forebears. We end up with a conglomeration of stuff that's been put together almost accidentally. Streamlining is the 'backtracking' that isn't available to biological evolution but is available to game design. It allows us to use our advanced knowledge of user experience, interface design, media psychology and all that good stuff that basically amounts to an increased understanding of the medium (which has enjoyed a boom thanks to this console generation's relatively long life-cycle, leaving people free to concentrate on that rather than on hurrying to keep up with the tech) to create a more effective game. But, of course, people complain about that because People Fear Change.

Let's talk about Deus Ex: Unreal Revolution.



DX:UR is a mod for Deus Ex that shows what Deus Ex would have been like if it were more like the recent prequel Deus Ex: Human Revolution. Of course, the complainers have jumped on it as a defence of their complaints about DX:HR by... showing that the mechanics of DX:HR would be terrible in the original. Yes, they would. You know why? Because they were designed for a different freaking game. It's like saying that taking your opponents' pieces would be terrible in Monopoly, therefore Chess is shit. It's like saying Blackjack is an awful game because a royal flush is a losing hand.

And then there are the people who complain about a game when the ink isn't yet dry on the press release announcing the game. The game isn't nearly finished, and it won't be released for ages - the design probably isn't even finalised. I'm all for speculative cynicism, but they work on the assumption that it's going to be shite and go from there (often based on the bizarre tin-foil hat assumption that it's not in a publisher's best interests to release a good game). Well, you know what they say about assumptions, but this creates a dilemma. Of course, I want to read about games in which I am interested, but I also want to play the game without the priming effect of that critical mass of complaints being deleterious to my enjoyment of the game. Complainers are, in a very real sense, detrimental to people enjoying games.

Business practices are another common target for complaints, and the situation here is a bit more nuanced. There are some business practices that deserve complaints, such as the implementation of excessively restrictive DRM when the only people bothered by it are legitimate customers, not the pirates the measure is ostensibly meant to dissuade. Or the trend of medium-inappropriate marketing (read: trailers) rather than demos that let you find out if the game's any good. Or the deeply entrenched arcane traditions of retailers that mean games can't be released on the same day worldwide. But the tone of those complaints - often involving gruesome metaphors of being anally violated by the big bad corporation, and exhortations to the Comrades to Fight The Power and Stick It To The Man - is, let's face it, just fucking silly.

Offering a product for sale at a price, payable in some kind of currency, is not a "shady business practice", and it's not hard to see where the accusation of entitlement so often levelled at gamers comes from. When I don't think a game is worth the asking price (for whatever reason), my usual course of action is to, you know, refrain from buying it. I don't generally feel the urge to start a revolution. DLC is very cleverly marketed to make you want it, but all the complaints about it are based on the premise that you need it, which is a false premise. Yes, a lot of it is overpriced and not substantial enough, but you know what? Probably 10% of DLC is worthwhile, just like 10% of everything else.

As an aside, related to asking prices, I want to pick up on something Ben Kuchera said in his article: "Every game is too expensive, although we demand ever-increasing levels of interaction, graphical fidelity, and length." More to the point, inflation means that everything else costs twice what it did 20 years ago, while games are priced the same, if not less. In real terms, we're paying half as much for games as we did 20 years ago, but apparently that's not good enough.

Maybe Jim's more right than I give him credit for. Maybe the complaints with which I have a problem aren't the most numerous, but are simply the ones I notice the most. I'm not claiming to be unbiased and objective, here, and perhaps I'm just ignoring a multitude of more constructive complaints. But I think the most constructive conclusion we can draw from this is that, like everything else, complaining isn't 'good' or 'bad', but good and bad and everything in-between.

Tuesday, 19 July 2011

Participation Concrète

This is a post I've been wanting to write for a long time. It's an important thing to talk about for me, since it really sums up what I'm all about as far as games and interactive stories are concerned, so I wanted to get it right. I did touch upon it a little in an earlier post called The Meaning of Choice, but I really want to expound on that idea.

I was listening to a talk that Braid creator Jonathan Blow gave at the Montreal International Games Summit a couple of years ago. While it's definitely worth listening to, I couldn't help but feel that there was something he was missing. He was talking about the meaning of game mechanics. After a while, it struck me: he was considering games as rhetorical devices.

Thinking about art-forms in rhetorical terms is nothing new. See the Susan Sontag quote that I used in The Meaning of Choice. Lajos Egri made a similar point far earlier when he talked about the 'premise' of a drama. The premise is a statement that the drama is a process of 'proving' or demonstrating, much as a thesis is proved by an argument. Very simple and neat. And, ultimately, all stories have this. It might not be anything profound, but there is a moral statement at the core of every story.

The problem I had with Jonathan Blow's application of this to games (if it indeed can be called a problem) is that while games are interactive, rhetoric is not. How can you offer a player a choice while trying to impart some moral statement to her? It doesn't seem to make sense. Egri suggested that drama must go the only way it can go to demonstrate its premise. How does that fit in with interactivity? The answer is: it doesn't. Rhetoric is not interactive. Rhetoric does have an interactive counterpart, however, which we call dialectic. It's a revolutionary new thing that those young firebrands Socrates and Plato are using against those musty old Sophists. Those crazy kids.

Let's talk about BioShock, since that was the game Blow criticised. The intended 'message' of BioShock appears to be that cooperativism is better than Objectivist individualism. Blow quite rightly pointed out that that message was lost in the interests of achieving game balance. And that's fair enough. However, if we change BioShock's premise from a statement ("Cooperativism is better than Objectivist individualism.") to a question ("Is cooperativism better than Objectivist individualism?"), BioShock suddenly provides ample space (albeit unintentionally, as far as I'm aware) in which the player can have a "dialogue" with the game to try to answer that question. This is a rudimentary form of video-game-as-dialectic.

One of BioShock's creepifying Little Sisters. Harvest her for resources out of self-interest, or save her to gain her cooperation.
I see the video-game-as-dialectic working very well for moral problems that are subjective or ambiguous. These themes are all too often missing from games, and their absence is the reason 'mature', in gaming terms, is shorthand for what is really a juvenile fascination with blood, guts, sex and swearing. This is strange, because a video game is theoretically far better equipped to tackle these themes than any other medium. Authors of traditional stories who flirt with ambiguity can only really propose a solution and have the reader accept or reject it, or else leave the solution open-ended and let the reader think about it. A video game, however, can assist a player in working towards a solution.

I don't want to get all prescriptive here, and I definitely think that rhetorical games are worthwhile (particularly serious games, and extra-particularly serious games for pedagogy, a research interest of the lab - I certainly don't mean to tell educators how to do their job!). However, from an artistic perspective, this is, for me, an important facet of the medium. I want to see games the moral premise of which is not a statement but a question, and the process of which is not rhetorically trying to prove the statement, but dialectically guiding a player towards answering that question.

Saturday, 16 July 2011

Consequence-free

I've been thinking about the notion of consequentiality in games. Of course, we all know that it's important for a player's actions to have consequences. You do something, and it changes something in the world. That's basic agency.

However, I am unconvinced by recent arguments that players should have to deal with the consequences of their actions. A classic example of this is the permadeath game; you get one life, and when you die, you can't play any more. A less extreme example would be a game that only offers one save slot, and saves over it each time you make a decision, disallowing backtracking to see what happens if you make a different decision (short of starting the game again).

I used to think this was a good idea. I, too, was fooled into thinking that allowing the player a do-over somehow cheapens the impact of their decisions. I thought it was a bad thing that I could choose both to disarm the nuclear bomb and to destroy Megaton, because if I could do both, where was the choice? What meaning did a decision have if I could just reload and try something different?

Megaton, what will become of you?
But recently, I've sort of come to the conclusion that it has more meaning precisely because of that. When we play a game, even a game with the most simplistic of stories, we divide our experiences into those that are canonical, and those that are not. Getting killed by the first Goomba in World 1-1 of Super Mario Bros. is not what really happened, right? What really happened was defeating Bowser and rescuing Princess Toadstool. Since being killed right at the start is not what really happened, it's unimportant, right?

We're quick to dismiss the path not taken because it's not what 'really' happened, but surely nothing gives more meaning to our actions and decisions than these glimpses of how it could have been otherwise. These abortive tendrils of narrative thread that will never be followed through - but that nonetheless hang around in our consciousness - offer us insight into what our 'true' decisions mean. I can only guess at what saving Megaton really means until I've actually seen it destroyed. In the end, I do both. My decision is not of which action to perform, nor of which action is right or wrong, but, in retrospect, of which action was 'true' and which was 'false'. Which action was 'canon' and which was just a look at an alternate possibility.

Did that just happen?
We don't want linear games; we want choice. We say that a game should not be linear, and yet we still hold fast to the idea that our path through the game should be linear. After all, stories are linear, right? I don't know about that. Perhaps such adherence to the linear structure of the experience of traditional stories is dismissing the power of the video game medium just as much as adherence to the linear structure of traditional stories themselves is.

Long live quickload!

Sunday, 9 January 2011

The Meaning of Choice

Let's get some housekeeping out of the way, first. I haven't written anything here in a long time, due to some personal circumstances. I'm mostly better, now, and ready to write some more.

Today I want to talk about choice in videogames. Janet Murray defines agency as "the satisfying power to take meaningful actions and see the results of our decisions and choices". We've more or less unquestioningly accepted this definition, and mostly agreed that agency is a good thing. But it's about time, now, that we look at exactly what it is that makes an action meaningful. Questions such as "What is meaning?" are philosophically interesting, but the questions are a little too vague - and the answers a little too ambiguous - to be of any practical benefit.

I keep going back to what Susan Sontag said about authorship and "the truth of fiction". She said:-
I think of the writer of novels and stories and plays as a moral agent. In my view, a fiction writer whose adherence is to literature is, necessarily, someone who thinks about moral problems: about what is just and unjust, what is better or worse, what is repulsive and admirable, what is lamentable and what inspires joy and approbation.
Sontag later, in the same essay, observed that moral issues are not necessarily only concerned with what is good and bad, but also with what is important and unimportant. What she says is easily observable in Aesop's Fables, or in the parables of Christ, but she contends that it is present in all stories. If we realign our expectations to the thinking that the moral problem at the core of the story is not necessarily very deep, and not even necessarily consciously considered by the author, I think I can agree with this. And this, to me, is what meaning is. To be meaningful in the context of a story, choices necessarily have to have a moral component.

Dungeons & Dragons' alignment system
There has been no shortage of games that have attempted to leverage this moral component in their decisions. In such games, choices are usually cast as a choice between 'good' and 'evil', often written into the game mechanics with a metric of just how good or evil your character is. Lionhead's games, such as the Fable and Black & White series, are fundamentally based on the idea that you get to choose between good and evil. The Fallout series likewise has its 'karma' system, and The Elder Scrolls series has its own metric of morality. The Neverwinter Nights games used Dungeons & Dragons' biaxial alignment system (above), which was an improvement, but not much of one. The problems with these choices are several. For one thing, the notion of 'good' and 'evil' is rather oversimplistic, which is why Fable III inconsistently conflated 'good' and 'evil' with 'popular' and 'unpopular'. For games to look at mature moral issues as other media do, we have to do away with looking at morality on a linear scale. Perhaps more importantly, though, they're false choices. We, as the player, are told up-front what the 'right' choice is and what the 'wrong' choice is. Why would anyone co-operating with the game's fiction pick the 'wrong' choice? (Parenthetically, it can be fun to play an evil character, but it comes at the expense of sacrificing our covenant with the narrative. "Good and evil" choices can therefore create a false dichotomy between story and fun, which no game should ever do. Ever. It's absurd.)

Authors of stories in traditional media have essentially two choices, and they choose one of these far more often than the other: they can present to the reader a solution to a moral problem, which the reader can then accept or reject, or, less frequently, they can present a moral problem to the reader and leave its solution as a challenge to the reader. Games are different. They can present a moral problem to the player and then react to the player's solution, following through on that line of thinking. In essence, I think of traditional stories as "moral rhetoric", and video games as (potentially) "moral dialectic". By telling us which choices are 'right' and 'wrong', game designers deny that potential, and limit games to the "moral rhetoric" of other media.

Fable III's choices failed because it told us which choices were 'right' and 'wrong'.
One way around this is the notion of subjective judgement. In games which employ subjective judgement, the game itself doesn't measure how good or evil the player's behaviour has been, but rather relies on the approval or disapproval of the characters (who will not always agree with each other) to provide that feedback. Subjective judgement has been tried a few times, but it rarely works. It rarely works not because subjective judgement is a bad idea, but because subjective judgement alone is not enough to make a choice meaningful.

Deus Ex employed subjective judgement to great effect. Complete the Castle Clinton mission non-violently, and Sam Carter would praise you, while Anna Navarre would criticise you. Go in guns blazing, and those characters' reactions would be the opposite. (At least, that would have been the case if not for some poor scripting which would, under some circumstances, cause a non-violent approach to be flagged by the game as a gung-ho assault.) The sequel, Deus Ex: Invisible War attempted the same trick, and pretty much failed. While DX:IW certainly presented the player with a lot of choices, the meaning of those choices was supposed to be imparted by the approval or disapproval of several characters, all of whom were equally unlikeable. Because it was hard to care what those characters thought, it was hard to care about the choices presented to the player, and those choices ultimately felt meaningless - a situation not helped by the fact that the player was free to switch allegiances at any point right up to the end of the game, no matter how badly they had pissed off a certain faction previously.

Deus Ex: Invisible War's choices failed because their meaning was contingent on the opinions of unlikeable characters.
BioWare's Dragon Age: Origins also did away with previous BioWare games' linear moral systems and introduced subjective judgement. With somewhat more compelling characters than Invisible War's, it was a lot easier to care about gaining those characters' approval. And yet, DA:O's choices still felt arbitrary and meaningless. DA:O's failing was not in failing to provide strong characters to judge the player's choices, but in failing to provide proper motivation and contextualisation for those choices in the first place. It's obvious that, before loading the game for the first time, the player has never visited Ferelden. A lot of the choices the player is asked to make (particularly early in the game, when the player is new to that world) are informed by things which the player character, as an inhabitant of that world, would know, but which the player herself, as a visitor to it, certainly does not. A lot of pressure is put on the player to decide what her character is like, and, even then, to make uninformed choices. A choice whose likely outcomes we can't reasonably project is a meaningless choice. (Choice was further undermined by approval being a transparently mechanical thing: it's ok to pick an option a character disapproves of if you have a nice gift in your inventory to give them afterwards. It was a little upsetting to see the effects of my moral choices immediately overridden by my companions' rampant materialism, but that particular manifestation of ludonarrative dissonance is gab for another day.)

Dragon Age: Origins' choices failed because they were informed by things the player couldn't know.
To summarise, then, for a choice in a video game to be meaningful, it needs to be:-
  • Not pre-judged by the author of the game,
  • Judged by the approval or disapproval of characters whose opinions we can care about, and
  • Based on information the player has, so that she can make an informed decision based on likely outcomes.