Monthly Archives: March 2014
So I’ve only just scratched the surface of McCloud’s Understanding Comics by reading the provided chapter, but I’m already kinda hooked. My critical appreciation of the comic and animation art forms has always been a background interest that I’ve meant to expand on, but it’s never been a high priority. Just this little snippet on faces as icons has made something very clear for me: we don’t passively consume anything.
When we watch live action representations of abstract-to-us settings on television and in movies, we still have a host of kneejerk checklists we go through. We expect to see crash carts, medical beds, curtains, x-ray lightboxes, and high-countered nursing stations for a hospital setting to pass muster before we’re even willing to consider its characters and stories. This happens to varying degrees of complexity, depending on the setting depicted, but we simply do not accept merely cursory attention to detail.
Yet we have illustration and animation that continually challenges the boundary between realism and stylistic choice. I think first of Archer, the FX adult-themed cartoon about a womanizing, emotionally-stunted manchild of a secret agent and his often inept intelligence agency, ISIS. From its beginning, Archer has pushed a visual aesthetic that relies heavily on detail, especially in the characters’ faces, as much of its humor is derived from face-to-face acerbic, sardonic, and sarcastic wit. We need detailed and realistic renderings of Archer’s faces because we want to see more refined differences in emotional state. We don’t want just happy; we want nervous relief, wryness, cockiness, and schadenfreude. We don’t want just sad; we want humiliation, disappointment, demonstrably feigned indifference, and vulnerability. We don’t want just anger; we want outrage, offense, seething hatred, and sublimated rage. Archer’s character interactions cover these nuances and all the shades of grey in between in every episode. Perhaps the production team’s emphasis on facial detail is what defines the series’ broader aesthetic of detailed surroundings, too.
Turn now to The Simpsons. It would be unfair to say that our eponymous characters and their fellow Springfielders don’t also require this nuance in facial expression; Homer Simpson has certainly expressed all of the emotional states I just attributed to Archer (I’d certainly hope so in 25 seasons!), and his illustration has the necessary nuance to do so. The difference between Archer and The Simpsons is that the latter is fundamentally not as emotive. The Simpsons is a more physically-derived comedy (especially in the latter half of its run), so while the characters are capable of being drawn for a similar range of emotional expression, it’s not as essential to their function in a scene. We need only to know the general state of their emotions. Thus, their faces are more simplistic and exaggerated representations, as McCloud suggests, stripped down to the essentials to enable animators to amplify meaning in a way that realistic art can’t (30).
My final example is something of a counter to McCloud’s suggestion that basic faces are necessary, but also an affirmation of his belief that the reader will fill in whatever details are necessary if the illustration is an appropriate canvas for doing so. XKCD, a founder of the webcomic genre, completely circumvents faces as a mode of expression. To my knowledge, the artist has never drawn a character with a face, relying instead on simple stick figures with one or two secondary identifying features, such as a hairstyle, a hat, an article of clothing, or a prop. Yet the bodily blocking of the characters, taken with the occasional inclusion of a setting or important item, has enabled the series to grapple with the gamut of emotion and happenings from madcap to sober. This is something of an apples-to-oranges comparison with two animated shows versus a static image comic, but I believe XKCD stands out as even more impressive as a result of this handicap.
McCloud’s suggestion that we are aware only of the fundamental features of our own face as others might perceive them is interesting, and generally correct, but I think we are willing to see ourselves in even more generalizable patterns. The human brain is a fantastic pattern recognition system, and since we are so keenly aware of ourselves/others and our environments, we seek to sort everything into a pattern in order to make more sense of our inputs. I think the real takeaway is simply that the messenger must not stand in the way of the message, as McCloud suggests, but defining where that line is crossed is the challenge.
Note: Back when we were trying to get back on track from the snow closures, I also wrote a post as an offshoot from the Barthes/Panzani ad assignment. You can see it here.
All kinds of arguments can be made about why pre-teen me attached himself so firmly to Star Trek (specifically The Next Generation): maybe it was the accessible morality plays and overlays of real-world issues (like all good sci fi); perhaps I appreciated the role models of strong, thoughtful, responsible men when I lacked a positive father figure; Trek certainly offered escapism from middle school unpopularity. All of those arguments would be valid, but for a child growing up on the cusp of the tech revolution, I also liked the shiny high-tech world the Enterprise crew inhabited. I of course liked high-profile devices like phasers, photon torpedoes, and tricorders, but ranking highly amongst those devices was also the PADD (personal access display device). It was easy and compelling to see the power of having access to a computer in the palm of your hand.
Move forward two decades and we have the realization of the PADD in the form of the iPad (that had to factor at least marginally in the naming choice, right?) and other tablet media devices. I’ll skip past the “magical” and “revolutionary” Apple product intro talking points – suffice it to say that these are great, versatile devices that have opened up avenues of access and media saturation far broader than we could have imagined when they were non-functioning slabs of plastic on our television screens in 1990.
Yet these devices aren’t perfect. Perhaps we’re still in the phase of initial remediation Delagrange details in Chapter 2 of Technologies of Wonder, because I’m pretty sure the Enterprise crew never had to deal with media incompatibility, unsupported file formats, or locked-in, low-functionality mobile versions of websites as their only option for PADD access. We want our iPads to be universally functional stand-ins for books, full-OS desktops/laptops, and television, but for now we have to get by knowing that most things will work really well, while we’ll also encounter the occasional website or service that just can’t translate to the tablet experience. It is then that we become acutely aware of the mode and its effect on the media we are trying to consume.
Delagrange notes the problems we may be experiencing right now as we realign our interactions and expectations of media:
With all media, but particularly with new media, the viewer experiences an oscillation between immediacy, the sense of immersion in or “liveness” of the medium, and hypermediacy, the ways in which the medium calls attention to its mediation . . . for most people, immediacy—a transparently “real” experience of a medium that erases the frame and appears to provide unmediated access to its content—is the over-arching desire of new media, and the desire of their users. (27)
And a bit later:
Hypermediacy, on the other hand, which calls attention to its mediation through the accumulative effect of stacking, layering, linking, juxtaposing, and other visual, verbal, and aural strategies, would seem to resist a unified perspective, offering a multiplicity of points of view on every screen … hypermediacy only reminds users of the immediacy they desire. If this is the unstated goal of remediation—a “new, improved” way to inhabit the same old unexamined Cartesian spaces and relations of knowledge and power—it is little wonder that conventional, conservative, transparent practices of “appropriate” academic discourse tend to reassert themselves in new media spaces. (27)
When the effect of this remediation kicks in fully, we’re reminded that we’ve not chosen the “real” thing. This reminder carries with it all the thoughts new media doesn’t want us to think – frustration that content doesn’t “just work,” the belief that the mode being used is amateurish or underdeveloped, or half-hearted plans to revisit the content in its optimized mode later – all of which risk translating into a negative reader judgment of the author(s) and their work.
Delagrange notes that this type of remediation struggle has occurred in preceding format expansions such as painting-to-photography, stage-to-film, and radio-to-TV. I’m acutely aware that in those moments there were doomsayers that history looked badly upon once the shift was fully realized and it was clear the world had kept on spinning, so I won’t proclaim the sky is falling. I will note, however, that this shift is objectively different and may therefore produce different results. The medium shifts that preceded it were generally 1:1 – content in one ubiquitous format moving to another eventually ubiquitous format. This shift from all of the above to new media is multi-channel. We have dozens of different options for consuming any single type of media, including the academy’s ongoing discussion. One app will work with a handful of streams, but not with a handful of other sources. Content providers or device manufacturers, through business arrangements or self-promotion, choose to actively preclude certain providers’ streams altogether. On top of that, the hardware in use sits at varying degrees of compatibility. I can’t really use an iPad 1 anymore because Apple has stopped updating its OS, and the apps I use to consume media streams are increasingly less compatible with the older, slower, less-capable iOS 5.x. In a fundamental way, this fractured pressure to consume different streams, in different spaces, and on different devices will stand as a barrier to hypermedia unifying into a monolithic, ubiquitous standard for quite some time, and perhaps indefinitely. This is a disincentive to publish in new media. We’re familiar with the channels we’ve been using for so long, and if we can’t unify around one or even a small handful of channels, the new media alternative is inherently more fragmented.
I’d like to think that in the unseen moments of Star Trek, we see Worf snarling and hurling his decrepit and barely functioning iPADD 35 across his quarters because the app he used to fill out his routine security briefings has crashed one too many times. His dark mood will continue into the turbolift to the bridge, where a smug Riker will sing the virtues of his iPADD 37S, and where Data will innocently (and therefore annoyingly to Worf) outline the virtues of the most recent build of Android OS. Maybe that’s why Worf is always in such a dour state.