After a deeper look inside the technological black box, we have a better understanding of what goes into the technology we use and of the blind spots we’d rather not think about: racial bias in algorithms, digital panopticism and surveillance, and the way those same algorithms are used to spread disinformation.
What does all this looking inside reveal?
Let’s start by looking at what some have already said about technology. For Martin Heidegger, technology reveals three things:
1. Technology is not an instrument,
2. Technology is not the product of human activity, and
3. Technology is the highest or ultimate danger.
What does he mean by this?
First, Heidegger says that technology is not an instrument but a way of looking at the world. He traces it to the Greek word techné, which he connects to bringing something into being. Whether we see technology as a neutral instrument (which is debated) or as something that carries bias (which is also debated), we are looking at the world through a particular lens, in this case a technological one (Verbeek). It shapes how we view the world around us. Technology is a way of thinking, a way of looking at and interpreting what is around us.
Next, he says technology is not a product of human activity. Because we are able to make technological things, we assume they are a product of us (Verbeek). However, according to Heidegger, the things we create are influenced by everything around us, things we see, things we don’t see, things we experience, and because of this we cannot truly understand what brings the technology into being. Its origin is unknowable (Verbeek). We don’t (and can’t) choose how we understand it, nor do we choose the frameworks in which we understand it, so though the final product may appear to be created by us, it really comes from somewhere else, an unknowable place.
Last, he says technology is the highest or ultimate danger. This is for two reasons: first, because we might stop seeing ourselves as beings that can have these interpretations of the world and begin to see ourselves as raw material; second, because every attempt to develop a new understanding of the world is itself a way of exerting power over that understanding. There is no escape from this will to power (Verbeek).
An example is hydroelectric power.
This is where a turbine converts the energy of flowing water into electricity. For Heidegger, we first have to take a technological view of the water: we have to see the power that water possesses. The desire to cultivate that power, and where that idea originates, is influenced by our environment, our upbringing, other people, our community’s needs, countless and unknowable things. We only see the power inherent in the water because we see the water as raw material, and we then exert our own power over it to draw the hydroelectric power from it. For Heidegger this is our greatest peril. Technology not only makes it possible for us to see ourselves as raw material; once we do, we cannot escape the will or desire to exert power over it (over us).
Another example is a car.
A car can only come into existence because we see nature as raw material. Without seeing things like aluminum and copper as raw material, the car could not come into existence. The images that pass by our window as we drive, or ride as a passenger, shape how we see the world (FutureLearn). Cars then become the source and outcome of technology, as Heidegger sees it. What, then, does a self-driving car reveal? We have now exerted power over the technology to the point that we can be removed from it, but how does this shape the source and outcome of the technology? What does this say about us as raw material? What does this reveal?
Revealing or Unconcealedness.
Let's look at what Heidegger says about this revealing or unconcealedness. For Heidegger this disclosure is found in the open: the revealing, or unconcealedness as he calls it, is a space of absolute truth, or Being. Heidegger talks about this space in relation to humans and animals, a discussion he takes from the eighth of Rilke’s Duino Elegies. For Rilke, because man is in the world and has the world around him, he can never enter or go outside of this space; only the animal can move in this space. The animal sees the open. For Heidegger, because the animal is unaware that it is in this space, the animal is shut out of it. For Heidegger the distinction between man and animal is language and the ability to enter into dialogue. It is our language that enables us to come face to face with the open, and ultimately with Being.
For Giorgio Agamben, the open is the space where humans and animals become indistinguishable. For Agamben, if language is what puts us opposite the open, then the lack of language is also when the confrontation ends. This moment of suspension is where humans and animals are indistinct, and humanity (as well as animality) is reconciled.
For Agamben, this is where we have a “remembrance of the oblivion,” a space where we can remember to forget the possibility of being (Bartoloni).
For these thinkers, we understand ourselves in relation to animals and language. However, as technology becomes an integral part of our lives, as algorithms collect more data on us, and as AI becomes more aware, this blurring of lines between humans and technology becomes a very real space. What happens when we examine this space not just for humans and animals, but for humans and technology? If it is language that helps us confront this space, what is reconciled when we do not speak the same language as the technology?
I’ll start with an example.
In 2017, Facebook set two AI chatbots the task of negotiating a trade. The two chatbots ended up making their own changes to the English language so they could more easily communicate with one another (Griffin). These modifications were not understood and could not be interpreted by the humans watching. Though their chat was incomprehensible to humans, the AI were able to successfully negotiate the trade. Facebook terminated the experiment.
Humans no longer understood the language created by the AI. Humans programmed the AI using one or more programming languages, then taught the AI to speak to one another using English. The AI then used English to fulfill certain parameters and, in doing so, created their own language.
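To make the mechanics of that drift a little more concrete, here is a minimal toy sketch, not Facebook’s actual system: two agents that only care about completing the trade, with nothing in their objective rewarding human-readable English. The items, the encoding scheme, and the function names are all invented for illustration.

```python
# Toy illustration of language drift between two negotiating agents.
# Nothing here rewards grammatical English, only that both sides agree
# on how to read a message, so a degenerate, repetition-based code works fine.

ITEMS = ("book", "hat", "ball")   # what's on the table

def encode_offer(offer):
    """Say 'I want n of this item' by repeating the item's name n times.
    Word order and grammar carry no meaning at all."""
    return " ".join(name for name in ITEMS for _ in range(offer.get(name, 0)))

def decode_offer(message):
    """The counterpart agent recovers the offer exactly by counting tokens."""
    tokens = message.split()
    return {name: tokens.count(name) for name in ITEMS}

# Agent A proposes keeping one book and three balls.
proposal = {"book": 1, "hat": 0, "ball": 3}
wire_message = encode_offer(proposal)

print("what a human overhears:", wire_message)        # book ball ball ball
print("what agent B decodes:  ", decode_offer(wire_message))
```

Nothing in the sketch stops the “language” from drifting further; all the agents care about is that encoding and decoding stay in agreement, which is roughly why the bots’ repetitive, un-English exchanges still got the deal done.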
Those who wrote the code were influenced by their own world view, just as the AI were influenced by theirs, and the AI created their own technology in order to communicate. This brings us back to Heidegger's second point about technology: it is not a product of human activity, and in this case it is not even simply a product of the AI’s activity, but something shaped by the AI’s world view. The AI created a new language, developing their own technology for a better understanding of their world. It removed the need for humans; like the self-driving car, it became the source and outcome of the technology. In this case, we cannot begin to understand what it (the AI) sees, or how it began to exert its power over the natural resources it had access to (the coding language, the physical structure that contained it, even the humans surrounding it). We (the humans) only saw the outcome, and we didn’t understand it. How do we begin to?
What is revealed or unconcealed?
What will technology like AI see when we (humans) are reflected back to it? And what will be revealed to humanity when the confrontation happens between the language of technology (in this case AI) and our lack of access to that language? What does the self-driving car understand? What is the car itself able to see and experience? Will it see its own self as a source of raw material? What rights will it view as inalienable? If we swap out the engine or the motherboard, is it the same vehicle, with the same experience? Is the open the space where it reconciles the raw material of its own humanity or technocality? Will AI experience the open for itself? Will it see itself reflected back through our eyes? Will it be able to reconcile its own Being? Will it see itself as technology created by humans, or only see humans for the raw material that we possess?
Yuk Hui reconsiders these Western questions of technology with his idea of cosmotechnics.
Hui challenges Heidegger's idea that there is only one kind of technology; for Hui there are many. If technology is indeed influenced by the unknowable (Heidegger’s second point), then Heidegger’s account is specific to Western culture, since the idea originates in the Greek word techné and came about only within Western philosophies. If it is specific to Western culture, how do other cultures connect or reconnect to something that never had an origin in their own culture? Hui believes we need a new cosmological understanding of technology, one that takes culture into account and is non-universal.
If technology has multiple origins, and there are many kinds of technology, is it possible to experience someone else’s Being?
Speculation:
If technology is, as Heidegger suggests, not an instrument but a way to look at the world, and, according to Hui, there are many types of technology, then is it possible through technology to experience someone or something else’s Being? If the open for technology is created by the confrontation between human and technology, but that technology could be one of many, and its language one of many (consider the many programming languages in use), then is the open that is experienced not universal? Can two technologies experience their own open if they speak different languages or have different technological ways of looking at the world (Heidegger’s first point), undisclosed to us? And how is that reflected back to us? Would we even know it is happening, or had happened? Would we have access to this iteration of Being?
With virtual reality today and the use of techniques like embodied montage, we are already blurring the lines of what we are able to experience. Embodied montage draws on the idea of film montage, adapted to virtual reality (VR) storytelling. In montage, rather than creating two sequences of shots that follow seamlessly, filmmakers juxtapose contradicting shots that, when combined, create a third meaning that didn’t exist in either shot alone. In VR, this third meaning can be created through new connections between the body and the environment, and between action and perception. IRL we expect certain movements to do certain things. When we look at something, we see what is in front of us; in VR our eyes are no longer restricted to seeing only what’s in front of us. We are now in an environment where we can see what is behind us without turning our heads, where each eye can see different things, where we can see the past or the future, and where the act of looking can trigger actions and specific movements. Real-life gestures no longer have to mimic real-life actions (Tortum, Sutherland). We can experience other places and other moments in time as if we were actually there. We can experience other people’s memories, lives, and more. One woman resurrected her dead daughter in virtual reality (Al Jazeera). Is it so far-fetched that we would be able to experience their Being?
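As a rough sketch of one embodied-montage technique, the act of looking triggering an action, here is a toy, engine-agnostic loop. Real VR frameworks (Unity, Unreal, WebXR) expose head pose and gaze differently; the vectors, thresholds, frame counts, and the “memory” scene here are invented for illustration.

```python
# Toy gaze-trigger: if the viewer holds their gaze on a target long enough,
# the scene cuts. No VR engine required; plain vector math stands in for gaze.
import math

def angle_between(a, b):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

GAZE_THRESHOLD_DEG = 10.0   # how precisely the viewer must look at the target
DWELL_SECONDS = 1.5         # how long the gaze must be held before triggering

def update(gaze_dir, target_dir, dwell, dt):
    """Accumulate dwell time while the viewer looks at the target; fire when held."""
    if angle_between(gaze_dir, target_dir) < GAZE_THRESHOLD_DEG:
        dwell += dt
        if dwell >= DWELL_SECONDS:
            print("gaze held -> cut to the 'memory' scene")  # the montage's third meaning
            dwell = 0.0
    else:
        dwell = 0.0
    return dwell

# Simulated frames: the viewer turns toward something placed *behind* them.
dwell = 0.0
for frame in range(180):                  # ~3 seconds at 60 fps
    looking = frame > 30                  # after half a second they find it
    gaze = (0.0, 0.0, -1.0) if looking else (1.0, 0.0, 0.0)
    dwell = update(gaze, (0.0, 0.0, -1.0), dwell, dt=1 / 60)
```

Nothing in the room moves in this sketch, yet the viewer’s own glance edits the scene, which is where the third meaning lives.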
Much of modern technology is a black box, sealed off to us, undisclosed, unrevealed. Even if we are able to look inside, do we really understand what we are seeing? Even if we can read the code that goes into it, anyone who has tried to debug someone else’s code knows how each coder is unique; in many ways their code is a reflection of their thinking, a shadow, only a piece of who they are, still not fully unconcealed. Even the person who builds a microchip doesn’t necessarily know what that microchip will be used for, or how. I like to hack hardware. Personally, I like to think of this as a repurposing of things. For example, I hacked my Wiimotes and used them to control a drum patch I made in Max 7, because I wanted to use my Wii controllers as a musical instrument. I later ended up using them to control video and sound, activated by motion and button/trigger presses. I turned them into an audio-visual instrument. Nintendo didn’t make the Wii controller to be used for this purpose; what does the intention versus the actual outcome of these technologies reveal?
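For a sense of what that repurposing looks like in practice, here is a rough sketch along the lines of that mapping rather than my exact patch: it reads a Wiimote with the cwiid library (Linux, over Bluetooth) and forwards button presses and motion as OSC messages that a Max patch could receive with a [udpreceive 7400] object. The OSC addresses, the port, and the swing threshold are assumptions made for this example.

```python
# Sketch: Wiimote -> OSC -> Max. Buttons trigger drums, a hard swing hits a cymbal.
import time
import cwiid                                   # Wiimote access over Bluetooth (Linux)
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)    # Max patch listening on localhost:7400

print("Press 1+2 on the Wiimote to pair...")
wm = cwiid.Wiimote()                           # blocks until the Wiimote connects
wm.rpt_mode = cwiid.RPT_BTN | cwiid.RPT_ACC    # report buttons and accelerometer

SWING_THRESHOLD = 150                          # raw accel value that counts as a "hit"

while True:
    buttons = wm.state['buttons']
    if buttons & cwiid.BTN_A:
        client.send_message("/drum/kick", 1)   # A button -> kick drum in the patch
    if buttons & cwiid.BTN_B:
        client.send_message("/drum/snare", 1)  # B (trigger) -> snare

    x, y, z = wm.state['acc']                  # a hard swing triggers a cymbal
    if max(x, y, z) > SWING_THRESHOLD:
        client.send_message("/drum/cymbal", 1)

    time.sleep(0.01)                           # ~100 Hz polling
```

The point isn’t the particular mapping; it’s that the controller’s data stream is just raw material waiting to be read differently than Nintendo intended.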
How Does Truth Get Revealed to Us?
Does it become as unknowable as Heidegger's animal in the open? Or is the suspension of human and technology already there? Does it already exist in every piece of modern technology that we use? What does modern technology already reveal to us about our humanity? Or is technology a potential gateway, so that by using different technologies we can gain access to other understandings, other tools of Being, and potentially access that which is undisclosed to us?
My technological exploration has revealed (to me) three things about technology: interaction, interchange, and access/accessibility.
Interaction
What do I mean by this? Digital technology is interaction. We directly communicate or have direct involvement with someone or something. We have the capacity to have a direct effect on the behavior or development of someone or something, as well as of ourselves. Because of the speed of digital technology, what was for Heidegger a world view has now become a behavior, in the world as well as in virtual spaces.
An example of this comes out in the way we speak. We tell people to “Google it.” It is not just a search feature but a way of telling someone to find something out for themselves (usually an important topic they haven’t taken the time or bothered to become familiar with). Swiping right or left has moved from the dating-app world of Tinder into real-life speech. These are interactions, ways we behave and engage, and each has specific behaviors associated with it. There are also distinct personalities: people who use iPhones versus Samsung phones, Apple’s iOS versus Windows. I know people who have Apple tattoos. There are culture wars over Apple and PC, just as there are culture wars over Xbox and PlayStation.
Interchange
Next is interchange: we can exchange things, ideas, commerce, and data, and even put ourselves in each other's place for a moment with virtual reality. That interchange can also be unknown to us. We don’t always know who we are exchanging information and data with, or who is even watching. Regardless, digital technology has created a space where we are never alone. There is always interaction and interchange, whether we want it or not.
An example of this is Google Docs, or any of the Google apps. If I am online, I am being tracked with cookies as part of the terms of service; I am allowing my data to be collected, and I have no idea what data is being collected or how it is being used. I am having an interchange with another entity even if I am alone in my room doing homework in my pajamas. Regardless of my knowledge of what is happening on the other end, I am still having an interchange. It may not be as overt as posting on Twitter and having someone respond, “heart”, or retweet my tweet (even if no one does any of these things directly, I can still check the stats on a tweet and see how many interactions [interchanges] it has had with nameless, faceless entities), but it is still happening. If I have location services turned on on my phone, if I am using Alexa or Siri, whether I am actively engaging with them or not, data is still being collected and interchange is still happening. I just don’t have access to all that is being exchanged. The exchange within an interchange isn’t always equal. The fact that information is being collected or observed doesn’t mean it benefits everyone the same.
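To make that invisible interchange slightly more visible, here is a small sketch: one scripted request to a page and a look at what the server asks to store. It uses Python’s requests library, doesn’t execute JavaScript, and so drastically understates what a real browser exchanges; the URL is only an example, and the cookies returned vary by region and over time.

```python
# One request, and a look at the cookies a single visit hands back.
import requests

response = requests.get("https://www.google.com")

print(f"Status: {response.status_code}")
print(f"Cookies set by a single visit: {len(response.cookies)}")
for cookie in response.cookies:
    # name, domain, and expiry are about all I can see;
    # what the value encodes, and what it's used for, stays opaque to me
    print(f"  {cookie.name:15s} domain={cookie.domain} expires={cookie.expires}")
```

Even this stripped-down exchange comes back with values I can read but cannot interpret, which is the point: the interchange happens, and the meaning of what is exchanged stays on the other side.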
Access/Accessibility
And last, access/accessibility. First, access is entry: the permission or ability to enter digital technology. This comes in many forms: terms of service (ToS), posting online, sharing a Netflix password, data breaches, Google Home, even having location services turned on on your phone. The second part is accessibility: who has the ability and permission to enter our digital lives, but also who can own, obtain, or acquire digital technology in the first place.
Digital technology is supposed to make our lives easier and more connected, but it has created unlimited access points to our lives. Even those who wish to be unplugged from the world are deeply impacted by digital technology, through the terms of service of credit cards, store loyalty cards, and the like. We may choose not to use a smartphone, but if our friend does, our meeting with them can still be recorded by their phone's location data, and our contact information is still stored in their phone. The Equifax data breach showed that as long as you have a credit history, any credit history, access to you is possible, regardless of your personal digital footprint. Did you get your $100 from the Equifax data breach (Equifax Data Breach)?
The COVID-19 pandemic showed us the disparities in accessibility to digital technology. The attempt to move classrooms and workspaces online proved difficult for underserved communities and communities of color. There were stories in the headlines of students sitting in parking lots to use Wi-Fi they did not have access to at home (Inside Higher Ed). A quarter of low-income teens were found to have no access to a home computer, and one in five low-income students had unreliable access to the internet or a home computer (Auxier). Access is an accessibility issue.
Digital technology has become a way to interact with, and behave in, the world. Heidegger treated technology as a way of thinking about the world; today, with digital technology, we use technology as a way to interact with the world. We behave technologically. We no longer accept or reject people, we swipe right or left on them. The language we use in texts, tweets, and posts has become the language we use IRL. Terms like OMG (oh my god), YOLO (you only live once), even my use of IRL (in real life), are cyberspeak that has crept into our everyday language.
Back to the Car
Let’s go back to the example of the car. For Heidegger, the car would be the source and the outcome of technology. We drive it, and it changes our point of view on the world. We also have to see the world as raw material in order to bring the car into existence. Heidegger’s view of technology dismisses or minimizes the role of the driver. Digital technology is so embedded in our lives that technology is a behavior, an action, and that is just as important (if not more so) as technology as a meditation on our lives. The driver of the car has responsibility. They have to perform certain actions and behaviors, agree to certain laws, and hold certain qualifications in order to be able to drive in the first place. There are consequences if they don’t. The behavior of driving can be impaired, and can be affected by accessibility, by interactions with passengers, by other cars (on the road, for sale, being built, and so on), and by the world at large. By removing the personal responsibility of the driver from his account of technology, Heidegger has removed any responsibility we have to the technology. The resources, the world, the technology are simply there, so the only option we have is to see them as raw material and exert our power over them. But is this true?
Personal Responsibility
With digital technology, our very language has become that raw material. One of the key elements that gives us awareness of the space of the open is reduced to raw material. If Heidegger’s view of technology is correct, then we must exert power over our own Being. The language that allowed us to enter into dialogue is now raw material, and we will only ever see the power in and over our own Being. For Agamben, we are already indistinct. The lines between human and technology have already blurred through the constant confrontation with language we have every time we interact with digital technology. Each time we interact with our phone, our online profile, our email, Alexa, our smart TV, these languages collide, and we become indistinguishable from technology. Whether we realize it or not, his open is happening all the time. Our collision with Being, our moment of suspension, is already here: we are already indistinguishable from the technology.
Restoring our interaction, interchange, and access/accessibility to digital technology, putting ourselves back in the driver's seat if you will, doesn’t just place a body or a placeholder in the seat; it creates a space for individualization, for personal experience, for personal responsibility. It forces me to look at what I bring to the technology, to the thought process, to the outcome, while I’m in the driver's seat. What is my background, what is my privilege, what can I change, who can I help, who should I help, and why does it matter? As the driver I have responsibilities and rules to follow. Do those rules need to be changed? Who do they benefit? Who has access to this vehicle, who doesn’t, and who should or shouldn’t without my permission? Putting us back in the driver's seat allows us to ask important questions about digital technology, about our behavior online and offline, and about how we create it. Questions we should have been asking a long time ago. Heidegger’s questions are great, and Hui brings us really good ideas too, but we need to start thinking about the ethics of digital technology, and the best way to move forward is to remember what we did to get into the driver's seat, our responsibility, and the potential outcomes of our time there.