The humanities as strategic, not philanthropic: when ‘techies’ meet ‘fuzzies’
Mozilla, the creator of the Firefox web browser, recently launched a $225,000 fund dedicated to artists. It might seem philanthropic, but Mozilla would argue it is strategic: the fund's purpose is to broaden the discourse on the challenges of artificial intelligence. Artists, comedians, and writers are keen observers of our world. They shape discourse, point out our blind spots, and probe the sensitive corners we might rather not discuss. They become the checks and balances, the ones who ask the questions in a world of Google-searched answers. The truth is that in a world replete with information, wisdom is extremely hard to find, and one of Mozilla's contrarian bets is that art can help point out the flaws in technology.
Today there is continual improvement in microchip processing power, and ever more sensors passively collect data about us and our world. What we overlook is that collecting data is the easy part; it is far harder to organize that data and extract meaning from it. In this process, humans design the models, set the parameters, and write the code. The decisions about which data to include and which to exclude are as moral as they are technological. Technology is only as good as its human inputs, and only as relevant as its impact on society. As Fei-Fei Li, a Stanford computer science professor and chief scientist for AI at Google, states, “There is nothing ‘artificial’ about this technology.”
It is made for humans, by humans. Our labeling of this profoundly human process as ‘artificial’ is, in fact, Orwellian.
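To make that point concrete, consider a minimal, hypothetical sketch in Python. Everything in it, the function, the field names, and the MIN_SESSIONS cutoff, is invented for illustration and drawn from no real system; it simply shows how a single human-chosen threshold quietly decides whose data a model ever sees.

```python
# Hypothetical sketch: one human-chosen parameter decides whose data
# is "included". All names and values here are illustrative.

MIN_SESSIONS = 5  # a human judgment call, not a law of nature

def build_training_set(records):
    """Keep only users with 'enough' activity; everyone else
    disappears from the model's view of the world."""
    return [r for r in records if r["sessions"] >= MIN_SESSIONS]

users = [
    {"name": "heavy_user", "sessions": 42},
    {"name": "occasional_user", "sessions": 3},  # silently excluded
]

print(build_training_set(users))  # only heavy_user remains
```

Whether the cutoff sits at five sessions or fifty is a judgment call, and every judgment call like it is baked invisibly into the finished system.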
We myopically categorize these problems as technical, filing them into neat buckets with labels like “robotics”, “data science”, “algorithms”, and “machine learning”. Are we afraid to admit human fallibility? Or do we simply over-hype the technology's capabilities, pinning our hopes on it to mitigate everything that is potentially subjective? These challenges are as rooted in the humanities, the study of human nature, as they are in the sciences, our study of the natural world, or in technology, our attempt to manipulate that natural world.
At the heart of the self-driving car revolution are anthropologists attempting to translate tacit human communication into code. In robotics, some of the greatest advances are coming from ballerinas who teach graceful movement to hard-plastic automatons, making the machines easier for humans to trust.
At Stanford University, where I studied as an undergraduate, we have for too long falsely divided the world into “fuzzies” and “techies”, those who study on opposite sides of the aisle. Any physicist will admit that much of their work borders on the philosophical, and any mechanical engineer knows that building a successful product requires user-experience research, in effect anthropological interviews, to understand the problem deeply. Most good engineers are highly creative, and many musicians are mathematical. Social scientists labour over data, and artists invent: Samuel Morse, a portrait painter by trade who thought to render sound in dots and dashes, gave us Morse code, the telegraph, and the original “text message”. In short, our labels for these oppositional “Two Cultures” could not be more false.
Charles Percy Snow, a novelist and a chemist, lamented this chasm in his famous 1959 lecture at Cambridge University. Today the names have changed, but the adversarial debate persists: learning to code versus studying the humanities, artificial intelligence versus ethics, humans versus machines. We coin neologisms such as the “Digital Humanities” and “STEAM”, which inserts “arts” into the middle of “science, technology, engineering, and math”, but the need cuts deeper than new departments and new names. We might begin by deconstructing departmental purity and value posturing, by using data while recognizing that quantification alone is not sufficient for truth. We might begin by lauding thinkers for asking the greatest questions, not just for racing to give the newest answers.
In India, where we judge students by testing ability rather than passion, lumping them into engineering, commerce, or arts streams at an early age, we only propagate this myth of separation. The best Silicon Valley engineers may well be Indian-trained, but I would argue that they are the ones who have engaged broadly with the arts and humanities as well, those with a deep appreciation for history, literature, music, and culture. Similarly, some of the most successful technology founders and CEOs, from the creators of Airbnb and Pinterest to those of AOL, Reddit, PayPal, and Slack, are all “fuzzies”.
All of them hold degrees in art, history, political science, or philosophy. We think of Mark Zuckerberg as a “techie”, but his greatest insights have been psychological, not technical.
While we debate, somewhat arbitrarily, which degrees best position students for success in a future we have not the slightest ability to predict, we ought to invest in the one thing we know for certain: the future is unpredictable, and adaptability is of enduring importance. We need to give people familiarity with technology, but we must also equip them with their comparative advantage in a more machine-enabled world, namely their humanity: their ability to communicate, to empathize, to collaborate, and to be confidently curious. The promise of technology is great, but arts and literature are not a luxury afterthought.
They are fundamental to the very human context that breathes life and purpose into computer code. We invest in the humanities not out of philanthropy but out of strategy: they are what give every last ounce of meaning to anything we ever build with our newest tools.
Scott Hartley is a Silicon Valley venture capitalist and the author of ‘The Fuzzy and the Techie’, now available from Penguin India.