The Archetypal Resonance of Classic JRPGs

When I learned about Xenogears, I knew that I had to play it. It was an irresistible package, seemingly tuned to my idiosyncrasies: a Japanese RPG from the 1990s golden era, featuring a famously deep (and convoluted) plot, and with a fanbase that still discusses its underrated status with almost religious zeal.


Despite my excitement, playing through Xenogears proved challenging. I’d forgotten the unique tedium of PlayStation 1 RPGs: a story rife with uneven pacing - due in part to production constraints; lots of random enemy encounters; a cadre of frustrating mini-games that you needed to beat in order to advance the plot. I quickly became aware of how short my patience with games had become, and found myself stunned when the timestamp on a save point indicated that only eight minutes had passed since the last time I’d checked my cumulative play time.

The compelling aspects of the game kept me going. The 2D sprites within a 3D world provided a memorable aesthetic that I think aged far better than the purely 3D RPGs from the same era. The music throughout the game is incredible, thanks to an effort by Yasunori Mitsuda that literally cost him his health. And the story legitimately hooked me, right off the bat - featuring a blend of bizarro gnosticism, psychology, and mecha that really only compares to Evangelion. (Xenogears and Evangelion were contemporary productions; I couldn’t find a definitive account, but it would be stunning if there wasn’t some sort of cross-pollination between the projects.)

The early pain ended up feeling like an investment, which steadily paid off as the game progressed. The combat system, initially overwhelming and unclear, developed a compelling dynamism as the characters advanced. Each difficult boss conquered and story point reached felt like an accomplishment; this was a rare, weird game - and working through it felt like a valuable, uncommon expedition.

The story ratchets up at the 11th hour, with a standout “Room of Understanding” that rivals anything I’ve seen in any medium. By trudging through the archaic game design and witnessing the late-game exposition, I felt like I’d earned access to hidden information. Upon finally finishing the game, I excitedly dove into the world of analysis videos and discussions online; I was a newfound convert, ready to partake in the zeal.

Having worked through the lore and oddities in the extended Xenogears universe, I now feel compelled to work through the other great RPGs of the era. In addition to the mainstays (e.g., Final Fantasy 7-9), I’m especially keen to play through the other titles that have been forgotten or underrated, like Chrono Cross and Terranigma.


It’s struck me that this is all sort of strange; what exactly am I doing? Why do I feel compelled to undertake these virtual voyages? There’s the reflexive explanation: like with any other artistic medium, it’s worth experiencing the underrated classics. That’s especially sensible in the case of Xenogears - given the industry legends that contributed to the game. You could also point to nostalgia, the desire to feel iconoclastic, or some idiosyncratic motivation to reach backward to this specific realm of obscurity.

But maybe there’s some deeper resonance at play. Suffice to say, the average PS1 JRPG contains a very archetypal story. The plot revolves around the hero’s journey - featuring a protagonist that starts off in meager circumstances, who is then thrust into an epic quest. Along the way, the hero accumulates a ragtag set of friends; some of these friends are standard companions, and others have complex arcs that begin adversarially. Crew in tow, the hero then grows steadily in power and influence, eventually playing a pivotal role in an ultimate clash between good and evil.


The gameplay is similarly orthodox: you start with a hero who is weak and has few companions; the hero eventually grows, through increasing adversity, into the archetypal leader capable of defeating the ultimate evil. The progression of difficulty in classic JRPGs is sometimes jarring and uneven - but there’s generally a strong correlation between the player’s growing capacity and the level of challenge placed before them.

I think there’s a very specific rhythm to actively playing through an archetypal journey, compared to watching or reading about one. You feel the frustration of the hero’s initial insufficiency; the relief of having a capable companion join you; the accomplishment of finding the way through a seemingly unbeatable battle. It’s a conscious psychological traversal that’s only possible through an interactive medium, and it seems especially distilled in the classic JRPGs.


We’re living through a period of immense change, chaos, evolution - however you prefer to label it. There’s a growing feeling that most every conventional axiom is up for redefinition: consensus morality, the nature of personal identity, our rights and our responsibilities to one another. In a sea of societal flux, familiar mythopoetic stories can feel like a life raft; a girdling force, capable of vividly illustrating the physical and psychological patterns that have endured across millennia, and that will likely continue into the future.

Can games like Xenogears function as narrative psychotherapy? I’ll stop short of making that claim - but I also won’t dismiss the possibility. My humble plan is to continue playing through these classic RPGs, without any sort of clinical precision. I’m not sure what exactly I’m looking to surface, or at what point I’ll reach sufficiency; I don’t think it’s realistic to play every JRPG under the sun.

But for now, this feels like an exercise worth continuing. Hopefully the virtual traversals will give way to some clearer understanding, over time.

Thanks for reading; you can check out my Twitter here

Social Outrage in the Fourth Dimension

Most of today’s social media scandals emerge in one of a few ways.  Either something recently posted is scandalous, and triggers an uproar; or something hidden in the archives of someone’s social media channel is resurfaced to today’s more unforgiving eyes.  (e.g., Kevin Hart and the Oscars hosting controversy.)

In other cases, people get in trouble for engaging online with someone (or something) incendiary.  A politically incorrect tweet was retweeted; a salacious Instagram post was liked; an upstanding person turns out to be following someone with extremist views.  These sorts of scandals have a pretty short half-life; it’s easy to chalk them up to user error (“I didn’t mean to do that!”), or redirect the blame (“It was my millennial staffer!”).

In an effort to stay out of the flames of the culture war, many people are proactively scrubbing their accounts.  Unfollowing people that make for questionable associates; unliking tweets that might be hard to explain later; sometimes shutting down their social media accounts altogether.  With enough foresight, this approach can work reasonably well.  There’s technically still a record out there, on some server somewhere, of what you did; but, in all likelihood, the surface area for an unwelcome digital scandal has been significantly reduced.

It’s hard to imagine that things will stay this simple.

Think of a popular paradigm that exists today: Apple’s Time Machine application on the Mac, which gives you the ability to “go back in time” to previous versions of a given file.  This is possible through local indexing and copying, which happens at set intervals (or in response to specific triggers).  Now think about an analogous service that captures the transactional state of every public social media account, from inception onwards.  Kind of like the Wayback Machine, but on steroids.
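
To make that concrete, here’s a rough sketch of what such a service would involve - with fetch_public_profile as a purely hypothetical stand-in for however a snooping agent scrapes a platform or calls its public API: snapshot each account’s public state on an interval, keep every version, and diff adjacent versions to surface what quietly changed.

```python
import json
import time
from datetime import datetime, timezone
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")


def fetch_public_profile(handle):
    """Hypothetical stand-in for however an agent would scrape a platform or
    call its public API; returns the account's visible state, e.g.
    {"posts": [...], "following": [...]}."""
    raise NotImplementedError


def take_snapshot(handle):
    """Record the account's current public state to disk, timestamped -
    the Time Machine half of the idea."""
    state = fetch_public_profile(handle)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = SNAPSHOT_DIR / handle / f"{stamp}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(state, indent=2))
    return state


def diff_follows(previous, current):
    """Surface follows and unfollows between two snapshots - the part that
    makes quiet scrubbing visible after the fact."""
    before, after = set(previous["following"]), set(current["following"])
    return {"followed": sorted(after - before),
            "unfollowed": sorted(before - after)}


def watch(handle, interval_seconds=3600):
    """Snapshot an account on a fixed interval, keeping every prior version."""
    last = take_snapshot(handle)
    while True:
        time.sleep(interval_seconds)
        current = take_snapshot(handle)
        changes = diff_follows(last, current)
        if changes["followed"] or changes["unfollowed"]:
            print(handle, changes)
        last = current
```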

This is understandably unnerving - but feels inevitable.  People will need to assume that there will be a record of every public message, regardless of subsequent deletion; or of every person they’ve followed, even if they’ve subsequently unfollowed them.  Certain folks, like Jack Dorsey, believe that the eventual pervasiveness of blockchain technology will make online interactions truly permanent. 

I don’t think that’s necessary.  You simply need an aggressive extension of paradigms we’ve already seen work in more constrained systems.  If and when this sort of deep-scraping begins to escalate, I’d imagine that platforms like Twitter and Facebook will introduce new limitations on their APIs, and throttle the ability for snooping agents to build this sort of temporal knowledge base.

At that point, though, the retrospective ability isn’t gone; it’s just been constrained to the hands of the platforms themselves, the same as it is today.  Will it be an acceptable compromise to trade away the ability for third parties to deep-scrape public content, in exchange for even tighter “stewardship” by the platforms?  Unclear; though it’s hard to imagine that this sort of API lockdown would hold up with the EU’s regulatory bodies, over any reasonable period of time.

At which point, maybe there’s a regulatory compromise: users can get access to the “deep” history of their social interactions, but nobody else.  Under that arrangement, a user’s account becomes an even juicier target for pernicious actors.  You don’t just get access to someone’s direct messages, but also every prior version of their follower/connection graph, and every piece of content they might’ve withdrawn association with.

Of course, this sort of escalation is contingent on people continuing to value what other people are posting, and who they’re associating with.  This seems likely, given the arc of human history and whatnot.  There’s a vicious tribalism that relishes social crucifixion.  But there’s also an emerging, redemptive counterweight: in some cases, we’re accepting that people can grow beyond their online mistakes.

There’s a social realm that exists between hollow apathy and searing inquisition; it’s probably not a fixed position.  How can we incentivize people to stay in that realm, and to apply minimum necessary force when addressing social infractions?  

A question for the times.

Branching Beyond Twitter

Twitter is easy to pick on; it feels increasingly incoherent. For the average user, the experience now amounts to watching tweets endlessly (and algorithmically) flow down a timeline, hoping for a fleeting gem: a fresh meme, a particularly inspired presidential tweet, or an endorsement for something worthwhile.  You can try to prune and mute your way to sanity, but most curation features (e.g., lists) feel barely supported.

Paradoxically, Twitter also feels more vital than ever.  It remains the world’s digital public square - relatively uncensored, and gushing with content (and spam) at increasing velocity.  Interestingly, the worthwhile unit of content remains the individual; you can follow news organizations if you feel like drinking from a firehose, but the interesting activity happens between active users.

It’s unfortunate, then, that discourse on Twitter feels like it’s regressed since the early days of the platform.  I think the kernel of the problem is the tendency to get stuck where you start, with little recourse.  You join the platform, and begin by following other people.  This is in itself rewarding, since you can follow people at a granularity that isn’t possible on other popular networks.  (e.g., check out Nassim Taleb’s disdain for Sam Harris!)  But to get someone’s attention, you need 1) some form of preexisting notoriety, or 2) a particularly inspired tweet that catches their eye.

Content aggregators, like Reddit, allow new posts to gain popularity through a different paradigm: topic-segregated channels.  You might have joined yesterday, but your post in the Gaming subreddit can get you a ton of Reddit karma, if you find just the right content.  The tradeoff with this paradigm is that content is truly king.  Submissions are effectively anonymous, and despite the occasional heartwarming exchange in the comments, the social interactions are overwhelmingly transient.  You leave the thread, usually never to return to it or its denizens.

Theoretically, Twitter’s hashtags provide a topic-like anchor.  In reality, I don’t know anybody that uses hashtags outside of live sports and other large-scale, transient events.  It’s a navigation lifeboat, used as a last resort. 

So what could a better modality look like?  If you think about why discourse is difficult on Twitter, a lot of it boils down to the UX.  If you’re decently famous, you tweet something and there’s a gush of replies, smashed together like an accordion.  If you’re not famous, and talking “laterally” to someone else, or a small group, then the replies string together endlessly.  Someone else can try to jump into the conversation, or fork the thread, but typically with non-obvious consequences.  Even fruitful threads die quickly, and are difficult to revisit or revive.  (Which tweet did that conversation revolve around?  I can’t seem to find it...)

Branch was a social media service that tried a different approach.  Billed as the platform for “online dinner parties” (bear with me), it organized conversations around organic topics, and allowed - as the name suggests - users to branch the conversation, at any point.  A key feature was the separation of reading and writing.  Users could selectively include participants in a small-group discussion, which could then be observed by anyone else using the platform.

The result was, surprisingly often, interesting dialogue that could become progressively more inclusive - without devolving into chaos.  A lot of threads were simply interesting to read.  It was great if you were invited to participate - but even if you weren’t, you could simply branch a comment into your own thread.  The same rules carried over; you could selectively add people to your forked conversation, and continue on.  And who knew, maybe your forked dinner party would become the next hot thing.
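
A minimal sketch of the data model this implies - my own approximation of the mechanics, not Branch’s actual implementation - with a private participant list for writing, open reading, and the ability to fork from any message:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set, Tuple


@dataclass
class Message:
    author: str
    text: str


@dataclass
class Thread:
    topic: str
    participants: Set[str]                        # only these users may post
    messages: List[Message] = field(default_factory=list)
    forked_from: Optional[Tuple["Thread", int]] = None

    def post(self, author: str, text: str) -> None:
        # Writing is limited to the invited participants...
        if author not in self.participants:
            raise PermissionError(f"{author} isn't part of this conversation")
        self.messages.append(Message(author, text))

    def read(self) -> List[Message]:
        # ...but anyone on the platform can read along.
        return list(self.messages)

    def branch(self, at_index: int, new_host: str, invitees: Set[str]) -> "Thread":
        """Any reader can fork the conversation at a given message, carrying the
        context forward into a new thread with its own participant list."""
        return Thread(topic=f"{self.topic} (branch)",
                      participants={new_host, *invitees},
                      messages=self.messages[: at_index + 1],
                      forked_from=(self, at_index))


# e.g. a reader forks message 3 of a public thread into their own dinner party:
# new_thread = original.branch(3, new_host="me", invitees={"alice", "bob"})
```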


Branch’s user interface revolved around discovering interesting dialogue.  The fundamental unit wasn’t the individual post (with dialogue as addendum), but rather the conversation thread itself.  The application highlighted which threads were gaining popularity, and allowed you to traverse conversations that included the particular people that you found insightful.  It didn’t have a very elaborate UX, and it seemingly didn’t need one.

One consequence of the design was that it felt natural to (periodically) revive dormant threads.  Each conversation had a limited set of participants, and a coherent topic - which together provided a stable context that could be revisited.  Sometimes it made sense to simply tack on a new comment to an old thread; other times, branching was the answer.  And again, any reader had the same power; if you had a flash of insight, or came across something amusing, you could take someone else’s old conversation in a new direction.

At its best, this sort of seed-and-branch cycle felt resonant with the original ethos of the internet: a distributed, organic approach to building knowledge that could endure.

Alas, Branch is no longer with us; the team was acquired by Facebook, and the service was shut down in 2014.  It’s peculiar that nothing similar has appeared since - nothing that is conversation-centric in the same way, or that allows for conversational branching.  Interaction paradigms across the social media landscape feel increasingly static; Facebook and Twitter remain largely as they were a decade ago, and the various chat apps have added bots and gifs, I suppose.  I hope we see more experiments like Branch, either as standalone services or within increasingly-vital platforms like Twitter.

We’re in dire need of better conversation - perhaps that’s the one consensus that still holds. 

Teradata's Lawsuit Against SAP

Teradata is a large enterprise software company; SAP is a larger enterprise software company.  At one point, the two were partners, working extensively on ways to harmonize their respective products.  Then things fell apart - dramatically; and in June, Teradata sued SAP.

What went so horribly sideways?  In a nutshell, Teradata alleges that SAP used the partnership to learn about Teradata’s core data warehouse offering, so it could then engineer its own competing solution.  Moreover, Teradata claims that SAP is increasingly focused on making its own competing data warehouse the only viable choice for interacting with its other, more established products.  To understand the fear and anger that now seem to be driving Teradata, it’s worth unpacking a bit of context.


SAP has historically dominated a category of the software market known as “enterprise resource planning”, or ERP.  Despite the painfully generic name, ERP systems provide a vital function: they manage the raw information pertaining to core business functions - including inventory, supply chain, human resources, and finance.  Whether you’re trying to understand your budget for the quarter, or calculating whether you have the appropriate inventory for fulfilling a client order, you’re probably interacting with some sort of ERP system.  (And there’s a high chance it’s provided by SAP.)  A standard SAP ERP installation operates across multiple business lines within a single organization, processing millions of data transactions per day, and acting as the source of truth for information related to suppliers, finances, customers, employees, and more.

While SAP’s ERP systems provide critical capability, they don’t address every data-driven need.  Namely, ERP systems have not historically been optimized to serve the needs of analysts who need to aggregate findings from the raw data.  Asking complex questions of large volumes of ERP data turns out to be a technically challenging problem; users want the ability to ask lots of questions simultaneously, receive responses quickly, and work with both the analytical questions and answers in their preferred software tools.  Having a system that’s tuned to fulfilling these “business intelligence” requirements turned out to be so valuable that it gave rise to a different class of companies.

Enter Teradata: founded in 1979, the company built its flagship “enterprise data warehouse” to fill exactly the type of analytical gap that is left unaddressed by ERP systems.  The typical data warehouse is essentially a specialized database that sits atop a customer’s transactional systems (e.g., SAP’s ERP systems) - pulling in subsets of data, and storing them in a manner that is optimized for efficient retrieval.  Analysts can use familiar applications (e.g., Excel, Tableau) to quickly pull slices of data from the data warehouse, in order to answer questions and produce critical business reports.  Teradata claims to have pioneered a “massively parallel processing” (MPP) architectural design that allows its data warehouse to scale linearly across thousands of end users, without diminishing performance for individual queries.
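
To make “pulling a slice of data” concrete, here’s the flavor of analytical query a warehouse is built to serve - sketched with Python’s built-in SQLite purely as a stand-in, since nothing below is specific to Teradata’s product:

```python
import sqlite3

# SQLite stands in for the warehouse; the point is the shape of the workload:
# load transactional rows once, then answer slice-and-aggregate questions fast.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("EMEA", "widgets", 1200.0), ("EMEA", "gears", 800.0),
     ("APAC", "widgets", 950.0), ("AMER", "widgets", 2100.0)],
)

# The analyst's question: "how did each region do on widgets this period?"
query = """
    SELECT region, SUM(amount) AS total
    FROM orders
    WHERE product = 'widgets'
    GROUP BY region
    ORDER BY total DESC
"""
for region, total in conn.execute(query):
    print(region, total)
```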


The complementary relationship between data warehouses and the underlying ERP systems led Teradata and SAP to partner, and announce a “Bridge Project” in 2008.  The crux of the project was dubbed “Teradata Foundation”, a jointly engineered solution that promised seamless data warehousing functionality, backed by Teradata, for customers using SAP’s ERP systems.  Throughout the development process, Teradata engineers were embedded with SAP counterparts - and, according to Teradata, conducted in-depth reviews of the technical features (e.g., MPP architecture) that underpinned Teradata’s fast performance.  SAP engineers were also provided full access to Teradata’s products - though not to any underlying source code, it would seem.  Teradata Foundation was successfully piloted at one major customer facility, and Teradata claims that the prospective business opportunity was in the hundreds of millions of dollars annually.

As SAP worked with Teradata on the Bridge Project, it began developing its own database solution - SAP HANA.  In the summer of 2009, SAP announced its intention to revitalize its core offerings by providing a next-generation, in-memory database.  At the time, the investment in HANA was primarily viewed as an attempt to sever SAP’s reliance on Oracle.  Oracle had long supplied the underlying database that backed SAP’s ERP offerings, and seemed to relish eating into SAP’s core business - while simultaneously tightening its frenemy’s dependency on the Oracle database.  However, as HANA began to mature, it became clear that SAP’s aspiration wasn’t simply to escape from Oracle’s grip.  In May 2011, SAP announced that HANA’s architecture would enable it to serve analytics workflows in a first-class manner - eliminating the need for a separate data warehouse like Teradata’s.

Two months after flexing HANA’s features, SAP unilaterally terminated its partnership with Teradata.  In the following days, SAP unveiled a new version of its Business Warehouse product - powered by HANA - that was supposedly capable of servicing the complex analytical workflows that Teradata’s product had historically targeted.  Teradata was understandably alarmed; a valuable, reliable slice of its revenue was about to vanish - if its former partner had its way.  As HANA’s development ramped up over the next several years, the relationship between SAP and Teradata grew increasingly strained.

In 2015, German publication Der Spiegel dropped a bombshell report; it alleged that SAP’s internal auditors found SAP engineers misusing intellectual property from other companies, including Teradata.  Moreover, the audit specifically claimed that HANA’s development had improperly drawn on external IP.  Shortly after presenting their findings to SAP leadership, the auditors were fired.  It’s worth noting that one auditor tried to personally sue SAP, claiming that the company tried to suppress his findings - and asking for $25M in relief.  SAP denied any wrongdoing, countersued the auditor, and had the personal lawsuit dismissed.  (Another victory for the not-so-little guy.)

Der Spiegel’s report gave Teradata its call to arms.  The company assembled a lawsuit that asks for two-fold relief: an injunction on the sale of SAP’s HANA database, and a broader antitrust investigation into SAP’s move into the data warehouse market.  The first claim is pretty straightforward, if a little wobbly: the Der Spiegel report indicates that SAP misappropriated Teradata’s IP, and Teradata attests that it has evidence that SAP reverse-engineered its data warehouse while the two were working as partners.  The wrinkle is that neither component of the claim provides a smoking gun; the Der Spiegel report doesn’t specify what was misappropriated, and Teradata hasn’t yet supplied evidence that the reverse-engineering occurred.

The second claim feels existentially motivated.  Teradata is understandably concerned that SAP will continue to make it more difficult for other data warehouses to work with its ERP systems, as it continues to develop and promote HANA.  A significant portion of the lawsuit is dedicated to describing SAP’s strength in the ERP market, and arguing that they are now using their power to anti-competitive ends.  60% of SAP’s customers plan to adopt HANA in the coming years, largely due to how it’s being bundled with ERP system upgrades.  Paired with SAP’s indication that it will only support HANA-powered ERP installations by 2025, Teradata sees its market opportunity dwindling. 

SAP has punched back, since the lawsuit was filed in June.  On the first claim, regarding misappropriated IP, they point to the disgruntled nature of the auditor who leaked the information - and who was denied $25M.  SAP attests that its own internal investigations surfaced no wrongdoing, and Teradata isn’t bringing forth specific evidence that suggests otherwise.  On the second claim, SAP has sidestepped any suggestions that it is locking down its ERP systems to work only with its HANA data warehouse.  Instead, they are painting a picture of natural competition, contending that Teradata is resentful that it hasn’t been able to compete in the evolving data warehouse market - and is looking to guarantee its historical marketshare.

Given the multi-year arc that’s typical of large technology lawsuits, we probably won’t see a verdict soon.  Teradata is shooting the moon, asking for both an injunction on the HANA product, and controls that will keep SAP from closing off interoperability with its traditional ERP systems.  Even partial success here could make a large difference to Teradata’s future.  SAP seems adamant to secure HANA’s future, and as past tussles with Oracle show, is clearly willing to engage in prolonged legal warfare.  Either way the case turns out, the verdict could prove meaningful for similar disputes that will inevitably appear down the line.

If you’re interested in reading through the lawsuit, you can check it out here

VisiCalc's Enduring Vision

Steve Jobs once quipped, "if VisiCalc hadn't debuted on the Apple II, you'd probably be talking to someone else".  Released in 1979, VisiCalc was the original electronic spreadsheet - and it's widely recognized as the killer application that landed the personal computer on desks across Corporate America.


These days, spreadsheet software seems about as interesting as plywood; it's the default way of "doing work" on a computer - and stereotypically mundane work, at that.  I personally can't remember ever using a PC that didn't have Excel or some other spreadsheet software installed on it.  I do remember learning Excel's basic functions on a mid-90s Mac, and saving my work to a floppy disk (which was ritualistically dismantled at the end of the school year).

VisiCalc was the original Excel, and its founding vision remains clear and compelling.  Dan Bricklin worked as an engineer at DEC in the early 70s, before heading to business school at Harvard.  Along the way, he grew frustrated with how rigid and cumbersome it was to perform calculations on a computer - especially if you needed to execute a lengthy series of steps.  At the time, programs would allow you to step through your work, one operation at a time; if you needed to change an earlier operation, tough luck: you had to redo everything that came after it.  Even for complex engineering and financial workflows, it was the equivalent of working with a jumbo scientific calculator.

Bricklin imagined a virtual whiteboard, which would provide the user with tremendous power and flexibility.  Instead of being at the mercy of a simple sequential interface, you would have "a very sophisticated calculator, combined with a spatial navigation system akin to what you'd find in the cockpit of a fighter jet".  With operations split into individual cells within the virtual space, redoing work would simply involve gliding over to the appropriate cell, and making your change.  Critically, any changes would cascade across the entire file, automatically updating cells that depended on the modified value.

Bricklin decided to pursue the idea, using the business school's timesharing system to implement the original code.  After bootstrapping a first version himself, Bricklin recruited his college friend, Bob Frankston.  Frankston built out a production version of the "Visible Calculator", targeting the MOS 6502 microprocessor in the Apple II.  When the software debuted at the National Computer Conference in 1979, it quickly garnered attention from the well-established PC hobbyist community - and even more attention from the enterprise market.

At the time, PCs weren't a common sight within large corporations.  A handful of domain-specific applications existed for certain industries, but by and large, any computing work was done using large mainframes.  Ben Rosen, a prominent analyst at the time, saw VisiCalc's launch as a seminal moment; it was the first piece of personal computing software that could be utilized for broad categories of business problems, required no technical understanding outside of the program itself, and was priced affordably ($100).  Rosen's convictions were quickly validated by the market; by 1981, VisiCalc was arguably the primary reason that Corporate America was purchasing personal computers en masse.


VisiCalc transformed business workflows that had historically relied on the use of mainframes or pen-and-paper.  The most obvious advantage was speed; performing calculations by hand - pertaining to accounting, inventory planning, or myriad other business functions - was often tedious and error-prone.  VisiCalc introduced the formula system that's still a cornerstone of spreadsheet software today, allowing calculations to be intuitively specified.  Paired with the automatic recomputation, the difference was night and day.  Changing one variable in a complex forecast no longer required hours (or days) of manual recomputation; you could simply update a single cell in the spreadsheet, and watch the chain of formulas automatically refresh.
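
To make the cascading concrete, here's a toy version of that model - a sketch of the idea only, not VisiCalc's actual engine (this one lazily re-evaluates formulas on read, rather than eagerly recomputing the sheet): cells hold either raw values or formulas over other cells, so changing one input updates everything that depends on it.

```python
class Sheet:
    """Toy spreadsheet: each cell holds either a raw value or a formula
    (a function of the sheet), and formulas are re-evaluated when read, so a
    change to one input cascades to everything that depends on it."""

    def __init__(self):
        self.cells = {}

    def set_value(self, name, value):
        self.cells[name] = value

    def set_formula(self, name, formula):
        # formula is a callable over the sheet, e.g. lambda s: s["A1"] + s["A2"]
        self.cells[name] = formula

    def __getitem__(self, name):
        cell = self.cells[name]
        return cell(self) if callable(cell) else cell


sheet = Sheet()
sheet.set_value("A1", 1200)                             # Q1 revenue
sheet.set_value("A2", 1350)                             # Q2 revenue
sheet.set_formula("A3", lambda s: s["A1"] + s["A2"])    # total
sheet.set_formula("A4", lambda s: s["A3"] * 0.1)        # 10% commission

print(sheet["A4"])    # 255.0
sheet.set_value("A2", 1500)                             # change one input...
print(sheet["A4"])    # 270.0 ...and every dependent formula reflects it
```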

Beyond the commercial success, Bricklin's team deserves credit for introducing several high-minded computing concepts to the everyday user.  VisiCalc's formula system popularized an approach that would become known as "programming-by-example"; a new user could learn the software's core commands by simply tracing through the calculations in an existing spreadsheet.  This transparency opened the door to meaningful collaboration; instead of trading papers, analysts could work off of the same VisiCalc file, or synthesize results from different files.  For many, it was the first time doing any sort of computer-based work in a collaborative manner.

As the PC market grew in the early 80s, credible competitors began to come after VisiCalc's throne.  Bricklin and his team defended their dominant position for several years, before eventually losing ground to Lotus 1-2-3 (built by Lotus Development, whose founder, Mitch Kapor, had previously worked with VisiCalc's publisher).  Lotus would enjoy success through much of the 80s, until Microsoft's Excel ascended to the top - a position that was massively fortified in the early 90s by the rise of Microsoft Windows.

Forty years later, VisiCalc's legacy lives on in each spreadsheet created using Excel or Google Docs.  Every tier of the modern corporation runs on electronic spreadsheets - perhaps to an unsettling degree.  It's hard not to grimace at the large, macro-riddled Excel files that (barely) get sent via email  - and wonder if we're overdue for another leap forward.  While there are now countless applications for aggregating and analyzing data, none have managed to erode the ubiquity of the spreadsheet.  Those pursuing the next killer application would do well to learn from VisiCalc's clarity in vision, if they hope to build something that endures.


Recurrent Intelligence

Artificial Intelligence is in vogue, these days.  Whether it's Elon Musk tweeting ominously about the looming dangers of AI, or the daily gaggle of CNBC analysts discussing how AI will upend capitalism as we know it - you can hardly turn on a screen without seeing some reference to the impending, machine-god future.  It turns out that we've been here before, in some sense.

In What The Dormouse Said, John Markoff traces the early history of personal computing, and its curious intertwining with 60s counterculture.  Among the book's central characters is Douglas Engelbart, a computer scientist obsessed with what would come to be known as "Intelligence Augmentation".  The transistor was barely on the scene when Engelbart began writing about a humanity augmented by computing.  He believed that both the complexity and urgency of the problems facing the average worker were increasing at an exponential rate - and it was therefore critical to develop fundamentally new tools for the worker to wield.

At the time, this conviction wasn't widely shared by Engelbart's peers.  Developing computing tools that could amplify human abilities was seen as interesting engineering fodder; as a serious research focus, it was considered shortsighted.  Buoyed by the incredible advances made during the prior decade, artificial intelligence was the preeminent focus in computing research.  Computers had demonstrated the ability to solve algebraic word problems and prove geometric theorems, among myriad other tasks; wasn't it simply a matter of time before they could emulate complex aspects of human cognition?  

Engelbart had drawn significant inspiration from the writings of Vannevar Bush, the head of the U.S. Office of Scientific Research and Development during World War II.  Beyond his operational leadership during the war, Bush became famous for his instrumental role in establishing the National Science Foundation, and his musings on the future of science.  Having overseen the Manhattan Project and other wartime efforts, he grew increasingly wary of a future where science was pursued primarily for destructive purposes, rather than discovery.  Bush believed that avoiding such a future was contingent on humanity having a strong collective memory, and seamless access to the knowledge accumulated by prior generations.

In his most famous piece, As We May Think, Bush conceived of the "Memex", a personal device that could hold vast quantities of auditory and visual information.  He saw the pervasive usage of Memex-like devices as a necessary component of a functional collective memory.  Engelbart, like Bush, believed that the salient idea wasn't the ability to simply store and retrieve raw information; it was the ability to leverage relational and contextual data, which captured the hypotheses and logical pathways explored by others.  Engelbart extended this vision, imagining computing tools that would allow for both asynchronous and real-time communication with colleagues, atop the shared pool of information.

The computing community's focus on artificial intelligence throughout the late 50s and early 60s meant that Engelbart, with his fixation on intelligence augmentation, struggled to realize his vision.  Most of the relevant research dollars were flowing to rapidly growing AI labs across the country, within institutions like MIT and Stanford.  Engelbart worked for many years at the Stanford Research Institute (the non-AI lab), spending his days developing magnetic devices and electronic miniaturization, and his nights distilling his dreams into proposals.  In 1963, his persistence paid off; DARPA decided to fund Engelbart's elaborate vision, leading to the creation of the Augmentation Research Center (ARC).

The following years saw an explosion in creativity from the researchers at ARC, who produced early versions of the bitmapped screen, the mouse, hypertext, and more.  All of these prototypes were integrated pieces of the oN-Line System (NLS), a landmark attempt at a cohesive vision of intelligence augmentation.  In 1968, Engelbart's team showcased NLS in a session that's now known as the "Mother of All Demos".  The presentation is charmingly understated; Engelbart quietly drives through demonstrations of the mouse, collaborative document editing, video conferencing, and other capabilities that would become ubiquitous in the digital age.

From there, the future we know unfolded.  Xerox PARC built upon Engelbart's concepts, producing the Alto workstation - a PC prototype that sported a robust graphical user interface.  Steve Jobs would cite the Alto as one of Apple's seminal influences, prior to the creation of the Macintosh.  The Macintosh would become the first widely-available PC with a graphical user interface, motivating Microsoft (and others) to follow suit.  As the industry took shape, channeling the ethos of augmentation, Engelbart would see his convictions vindicated.  Alas, he would do so from the sidelines, growing increasingly obscure within academia while others generated unprecedented wealth and influence.

Despite significant progress in fields like machine vision and natural language processing, the enthusiasm around AI would wane by the mid-70s.  The post-war promise of machine intelligence had been nothing short of revolutionary, and the field had failed to deliver on the hype.  The American and British governments curtailed large swaths of funding, publicly chiding what they felt had been misguided investments.  In the estimation of one AI researcher, Hans Moravec, the "increasing web of exaggerations" had reached its logical conclusion.  The field would enter its first "AI winter", just as the personal computing industry was igniting.

It's difficult to analogize the rise of personal computing; there is hardly an inch of our social, economic, or political fabric that hasn't been affected (if not upended) by the democratization of computational power.  While we don't necessarily view our smartphones, productivity suites, or social apps as encapsulations of augmentation - they are replete with the concepts put forth by Engelbart, Bush, and other pioneers.  Even so, some argue that the original vision of intelligence augmentation remains unfulfilled; we have seamless access to vast quantities of information, but has our ability to solve exigent problems improved commensurately?

Since the original winter, AI has continued to develop in cycles.  Suffice to say, we're in the midst of a boom; compounding advancements in commodity hardware, software for processing massive volumes of data, and algorithmic approaches have produced what's now estimated to be an $8B market for AI applications.  Media and marketing mania aside, there is a basis for today's hype: organizations that sit atop immense troves of data, such as Facebook and Google, are utilizing methods like deep learning to identify faces in photos, quickly translate speech to text, and perform increasingly complex tasks with unprecedented precision.

However, even the most sophisticated of these applications is an example of narrow AI; while impressive, it is categorically different than general AI - the sort of machine cognition that was heralded during Engelbart's time, which has yet to appear outside of science fiction.  Many of today's prominent AI researchers still consider general AI to be the ultimate prize.  DeepMind, a prominent research group acquired by Google, has stated that it will gladly work on narrow systems if they bring the group closer to its founding goal: building general intelligence.

Will AI become the dominant paradigm of the next 30 years, in the way that augmentation has been for the past 30?  Perhaps the question itself is needlessly dichotomous.  Computing has grown to occupy a central role in today's world; surely we possess the means to pursue fundamental breakthroughs in both augmentation and AI.  It's telling that Elon Musk, concerned about the unfettered development of AI, has also created Neuralink, a company aiming to push augmentation into the realm of brain-computer interfaces. 

The frontiers of both paradigms are expanding rapidly, with ever deepening investment from companies, governments, universities, and a prolific open source community.  As exciting as each is individually, it stretches the imagination to think about how the trajectories of Artificial Intelligence and Intelligence Augmentation might intertwine in the years ahead.

Buckle up.

Twitch As Surprisingly Convivial IRC

While aimlessly browsing Twitch a few days ago, I stumbled across a channel that was streaming Final Fantasy XIV.  I've long been fascinated by the massively-multiplayer Final Fantasy games, starting back when developer Square Enix announced its first attempt, Final Fantasy XI.  These games have a daunting goal: distill the iconic design and narrative motifs that define the series's single-player experiences, and faithfully embed them within a compelling, persistent multiplayer world.

As I watched the caster adventure through one of the game's dungeons, I had a lot of questions.  Was the combat as interactive as the animations made it look?  Was this a "pick-up group", or was the caster playing with friends?  What character class was the most popular?  Nowadays, what features made FFXIV stand apart from other massively-multiplayer games?  Typically when watching videos or streams, I let such questions drift out of my consciousness, unanswered; or, I resort to quickly googling the ones that nag at me.

Much to my surprise, the stream's chatroom was active - and it wasn't a blitz of emoticons, trolling, or other cacophony.  Rather, the caster was fielding questions from each of the ~10 participants, and generally facilitating a discussion among them.  I decided to pose my questions to the group; and I was again surprised - both by the speed and the thoughtfulness of the responses.  Someone would take a first swing at an answer; someone else would respectfully insert a caveat; a third participant would ask a clarifying question.  It was a civil, productive discourse; and it felt totally bizarre.

After learning a satisfying amount about the state of FFXIV, I decided to peruse a few other channels - to see whether that experience had been a fluke.  I joined a few speed runs, which ended up being mostly silent affairs, with the scarce chat focused on pointing out suboptimal decisions the caster had made.  (The attempts at dialogue I made were met with not-so-thinly-veiled annoyance.)  The more popular channels, predictably, were simply too noisy to facilitate much of anything.  Casters either made periodic attempts to respond to the aggregate sentiment gushing out of the stream of emoji and exclamatory statements, or ignored the chat altogether.

A while later, I came across a Mass Effect stream that clicked.  The ~20 spectators in the chat were leisurely discussing the classic game, and the series's forthcoming Andromeda entry.  I ended up learning quite a bit about the upcoming game, which I'm now excited to play.  Shortly thereafter, I found a sleepy channel dedicated to playing through the older Zelda games.  The spectators and caster were engaged in a lively debate about which prior entry the forthcoming release, Breath of the Wild, seemed most reminiscent of.  Despite my relative ignorance of recent Zelda games, I stuck around for over an hour. 

On the surface, none of these amiable encounters seem especially noteworthy.  ("Oh, you had a not-shitty time talking with strangers on the internet?  Mazel tov.")  Perhaps, but I struggle to think of other popular sites that permit you to filter down to a specific topic, such as a particular game, and then [potentially] have a decent time interacting with other anonymous visitors.  Sites like Reddit foster thriving communities of all shapes and sizes; but those are very different than the near real-time, transient nature of these Twitch channels.  At their best, a few of the channels were reminiscent of a bygone era of IRC.

This was a long-winded way of saying: certain Twitch channels can be compelling places to learn about games, and chat with likeminded strangers.  It's bound to be hit-and-miss, but a decent heuristic I've found is: look for channels with ~15-30 people, and a caster that's making an effort to engage with the chatroom.  Take a swing at asking a few questions - and who knows, you too might partake in a cordial online experience!  

The Fracture

I keep thinking about coherence, in the context of modern media consumption.

Whatever you think about the editorial side of "new media" - most recently centered on the debate around fake news, and whether sites like Facebook have a duty to guard against it - I think there is a more fundamental topic that merits discussion: the sheer inundation of digital news, and its effect on our collective ability to converse with one another.

Michael Stonebraker, perhaps the most influential living computer scientist within the realm of data storage and processing, has famously stated that "big data" is at least three separate problems.  While he framed these delineations in the context of designing data systems, I've found them to be useful lenses through which to examine the media inundation.

He dubs the first problem "Big Velocity"; i.e., the increasing rate of inbound data that needs to be processed by a given system.  Applied to the premise, we see this problem driven by the increasing dominance of mobile-tuned media, where the speed of news (loosely defined) coming at an individual today ranges from excessive to alarming.  Many of us wake up to push notifications from CNN, Fox, Bloomberg, and endless other news sites; throughout the day we get stories pouring in from Facebook and Twitter, along with the dedicated news sites (unless you've wisely pruned your notifications); and even into the night, as the stream of fresh stories seems to crest, recaps and editorials seamlessly take over.

Stonebraker identifies the second problem as "Big Volume"; or, plainly put, the data we need to deal with is getting larger in size.  In the world of data systems, this necessitates the design of new software that is capable of operating over large quantities of data - irrespective of the velocity at which it might be growing.  With regard to media consumption, this maps to our inability to fully process the pile of information that accumulates each day.  Sure - we skim the first few paragraphs of an eye-catching article; glance through our Twitter and Facebook timelines for the summarized bits above the fold.  Or, if you're like me, you queue up the more interesting articles for later reading, using Instapaper or Pocket...only to have to purge them months later, in a cathartic act of literary bankruptcy.

The third problem is "Big Variety"; the number of distinct sources of information isn't simply increasing - but is growing at an accelerating rate.  Anyone that claims to provide data integration technology (or services) is trying to address some form of this problem.  Such efforts usually entail reconciling heterogeneous data sources into a common format, so that the integrated information can be leveraged in some way.  It's now a banality to talk about the explosion of variety in news media; where once ABC, CBS, and NBC reigned as a triumvirate, there is now a sprawling panoply of digital outlets - all competing for consumer attention on a much flatter playing field. 

Most compelling attempts to integrate the media firehose are constrained by category.  You can visit Techmeme for a roundup of tech news; Politico for political news; Hypebeast for fashion news; and so on.  Services that aim to provide broader integration, like Google News or The Huffington Post, tend to do so at the expense of coherence.  (I trust these sites to highlight the most "important" story or two of the day; but beyond that it's cross-category noise.)

Consequently, there is no real integration.  The onus is on each consumer to pick from the buffet of media services (each curating content in a bespoke manner), and then decide how to triage attention among them.  Even for folks that predominantly rely on Facebook for information, manual feedback drives what content is surfaced; each person chooses what friends remain visible in their feed, what pages they like, and what articles they click on. 

The net result is that each consumer is experiencing their own bespoke form of media inundation.  Considering the volume and velocity with which content is pushed through digital information channels, the cause for concern becomes clear.  Post-election, the alarm around fake news (justified as it might be) is really a proxy concern for this accelerating balkanization in media consumption.  What happens to our collective ability to debate societal issues and assemble consensus, when every person is absorbing a custom blend of fact and fiction?

I think about a recent comment that President Obama made, when Bill Maher asked him about the prevalence of "bubble thinking" that hinges on misinformation.  Obama paused, and said that at a certain point, strengthened tribal identities give way to "interpretation through symbols" rather than reason.

I sincerely hope that the legacy of the information age - built atop science and reason - does not include the widespread erosion of consensus reality, in favor of fractured and self-reinforcing individual realities.  Societal progress, quite literally, depends on our continued ability to reason with one another.