Developments in ICT, robotics and artificial intelligence (AI) bring forward technologies that merge with our environments. Many technologies today are not concrete devices, artefacts or tools that are explicitly used. Some technologies are rather the context for human experiences and actions, functioning in the background. It appears, however, that a new generation of technologies will feed into our material environments more radically, making them more and more active. Similar to how technologies can merge with our physical body (e.g. cochlear implants, pacemakers), “hybrid” environments come into being as technologies and the world become fused. By virtue of different types of sensors, data analyses and extensive communication networks, all kinds of tasks and decision-making processes can be outsourced or delegated to concealed technologies. As such, our environments can perceive and act upon us. We already have plenty of experience with how technologies have (and have had) great impacts on how we live our lives and how we perceive the world. Moreover, it is widely acknowledged that technologies change us, and even that technology is an intrinsic part of the human being. As technologically mediated beings we become who and what we are in interaction with our material environment. We – homo faber – make and use technologies which in turn shape our minds and bodies. But what happens when those technologies fade from sight? Could the digital revolution and the development of smart environments require us to change our current human self-understanding? In this explorative paper I turn to the theories of Karl Marx and Hannah Arendt, who wrote in a time in which the technologies of today could hardly be imagined, but who addressed concerns that still resonate in today’s debate.
After briefly reflecting on the development of smart environments, I will discuss Marx’s ideas about the importance of work (and technology) for how we understand ourselves, and Arendt’s critique of this thinking. Fears about deskilling and the replacement of human labour by machine labour date back to the Industrial Revolution. However, it turns out that so far humans have remained needed – automation created new jobs – and consequently it did not lead to a crisis of self-understanding. In light of current technological developments it is interesting to reflect on their theories again and give them an “update.” Doing so will help us to look critically at new and emerging technologies, as well as at the comeback of worries about deskilling, loss of control and alienation.

I Smart environments

Philosophy of technology has examined the influence of technology on our existence (actions) and how we experience the world (perceptions). Some authors – like Heidegger, Jaspers and Ellul – addressed “Technology” with a capital T, typically treating it as an abstract, monolithic force. Others – like Ihde and Verbeek – focus on concrete artefacts and conduct more differentiated and empirically informed analyses of technology. Ihde identified four ways in which technologies play a role in human-world relations. One of these is the so-called background relation. A typical example of this type of relation is the thermostat, which, once properly installed and programmed, does not need our attention while keeping the room at a constant temperature. All kinds of automatic and semi-automatic machines and devices work like this; we are only involved at the start or end, or for instance in case the system fails. However, Kiran convincingly shows that it is wrong to characterise such background relations in terms of “intentional absence,” because even in their state of non-use they have a specific type of presence.
These background technologies have, according to Kiran, a much more active role in shaping how we see and understand our lifeworld than Ihde assigns them. Smart environments appear as a special type of background relation, as the technologies are not even meant to be merely passive (mute/stable) in the “background.” Smart environments are supposed to make our environments flexible and adaptive using sensors and communication networks. In the process of embedding communication and information technologies in all kinds of devices, a network is being created of everyday objects that can communicate with each other: the Internet of Things (IoT). This evolution of the internet and the developments in computing and artificial intelligence promise to revolutionise daily human life. Where once we adapted to our environment, we started to adapt our environment to us. Now, it seems, we are starting to let our environment adapt itself to us. By adding the relation of “immersion” to Ihde’s typifications, Verbeek aims to describe technologies that are more radically “environmental” or “ambient”: the technology merges with the environment, and the technological background interacts actively with human beings. This new characterisation helps to understand the special kind of background relation we seem to be confronted with in smart environments. However, Verbeek seems to focus again on the effects of these new types of technologies in use (actuality), instead of on their significance as lifeworld phenomena. Kiran emphasises that “technological shaping of the lifeworld happens in terms of possible technical mediations, not just actual technical mediations.” Our (future) actions, Kiran says, are co-shaped by the technological possibilities we recognise in our lifeworld. Ultimately, he argues that to understand ourselves is to understand our possibilities.
Interestingly, in a forthcoming article Aydin, González Woge, and Verbeek do acknowledge that the focus on things seems insufficient to understand the agency that arises from the merging of technologies with the environment. It would be striking if these discussions could lead to a conception of technology with a capital T again… For the purpose of this paper it is not helpful to elaborate on this in further detail. What I want to underline is that smart environments are not explicitly “used,” but at the same time are meant to “do” something: perceive us and work on us. As such, smart environments constitute an interesting category of human-technology-world relations that could be problematic for homo faber. Future smart environments might require less involvement of human beings in all kinds of activities and decision-making processes. Moreover, they could give unsolicited feedback on our behaviour. The type of automation and interaction that current technological developments make possible requires us to think about what it makes of us. What would it mean, for instance, if smart environments turned out to be fairer judges, provided more accurate medical diagnoses or made better police officers? In order to think about what it means to get rid of certain types of work as a result of smart environments, I will first elaborate on how Marx relates self-understanding to work.

II Marx: Self-understanding related to work

For Marx, work is what makes humans “human,” and in his conception of work technology plays a crucial role. Bradley emphasises that Marx is not simply a thinker of being as work; in his theory of work, tools play a central role. Homo faber can be translated as “man the maker” or the “working man,” but as Bradley’s discussion of Marx seems to suggest, “man the maker of tools,” or “the tool-using animal,” would be more accurate.
In order to survive, the human is forced to develop artificial extensions that enable him to compensate for being maladjusted and unarmed. The human being is a Mangelwesen, a deprived animal, as Gehlen described it. It is important to note, however, that Marx overcame the instrumental view of technology. He recognised that we do not only use tools to modify nature to our needs in order to maintain our existence (i.e. to provide ourselves with the necessities of life); we are also shaped back by those tools. Tools, our technologies, are not neutral. As such, work is about transforming as well as about coming into being. Work has shaped both our body and mind. As Engels argues, it all began with the upright posture that freed the hand; the free hand enabled tool use, and tool use shaped the hand as it evolved and adapted over time. With new tools came “new ends, needs, instincts and ideas.” When Marx talks about objectification, he does not mean it negatively. When you work you make something, an object. Work is ultimately also about putting something of yourself into the end-product. Our ideas are embodied in the objects we make, and as such we always materialise culture in our work. It is therefore also through the object that you come to know yourself. What we lost in the Industrial Revolution, according to Marx, is the connection between the human, the tool, and the end-product. Industrialisation (i.e. automation) made the human part of the machine. We become alienated when we do not know the products we produce. Under capitalism the product becomes everything and you become nothing. The factory worker works very hard; the money goes to the director, who expands the company; the worker can continue to work, but for less, or be easily replaced by someone else, as no specific skills are required anymore. When you make things that are not for you (e.g.
not under your control, you are not sure about what you are making, only getting a salary in return), the work dehumanises you. What is furthermore worrisome is that not knowing the end-product also implies that you do not control what shapes you back. Maslow’s famous phrase “if all you have is a hammer, everything looks like a nail” captures something of this idea. The hammer shapes how you act in and perceive the world. When it is not your hammer, but someone else created it, someone else ultimately has power over your actions and perceptions. According to Marx, capitalism will eventually lead to its own collapse. Instead of liberating human beings, automation in a capitalist society brings about forced labour. Two things happen at the same time: as technology produces its own power it reduces labour time (more work can be done in a shorter time), while at the same time we do not work less, because labour time is the measure and source of wealth. Moreover, capital relations have not only subsumed labour, but also our culture and thereby all aspects of life. In other words, the worker-boss relationship structures society, and according to Marx there is no freedom until that structure is gone. Imagine we were liberated from this dehumanising type of work, and technology really took over all of it. If all that were left for human beings were hobbies, this would seem – also to Marx – to make for a better society. But would this then undermine the central value of work for what it means to be human, by making it irrelevant for existence? In order to answer the question “how important is work?” we need to get a better idea of what should be considered “work.” Not working does not mean doing nothing at all. We can engage in politics or philosophy, create art, make a nice wooden table. All these activities would not be necessary for our subsistence, because machines work the land, clean our homes and produce our furniture.
It seems the value of work is only undermined if one has a narrow view of existence (existence as subsistence) or a narrow view of work (work as providing for subsistence). Possibly, getting rid of some types of work might not be problematic – maybe work can still be the interface between humans and nature. It will be helpful to take a closer look at Arendt’s ideas about this and elaborate on what kind of work actually can (and is desirable to) be eliminated.

III Arendt: distinguishing labour from work

Arendt distinguishes three kinds of human activities: labour, work and action. Labour is characterised by necessity; it concerns all occupations that serve the needs for the maintenance of life. As such, labour is associated with what slaves do, and Arendt refers to the “animal laborans” as a lower degree of human existence. Slaves are not free, but rather an extension of the master, taking care of food collection, building shelter, and other kinds of more cyclical tasks. Work is different, as it breaks with the cyclical nature of consumption. The decisive criterion that differentiates work from labour is duration; the things produced by work are meant to stay. Work is more connected to artefacts – they exist longer than the individual that produces them. Hereby work implies progress, whereas labour is just for immediate consumption. Arendt places homo faber at the level of work, and indicates it is more human than the animal laborans. Action – being engaged in politics – however, is even more free and truly human. Even though intuitively it seems that we should automate labour (i.e. get rid of necessities, slavish labour) and that it would be harder to automate work (i.e. because it requires a human to set its goal), Arendt argues that we cannot automate labour. Instruments or gadgets can only make labour easier; something that does labour must be alive. We are always first of all dependent on the natural cycle.
What happens in the Industrial Revolution, according to Arendt, is that what used to be work becomes more labour-like. Basically this lines up with a critique of the consumer society. What we produce is used for a shorter and shorter time; it becomes cyclical. To quote Arendt: “Tools and instruments ease pain and effort and thereby change the modes in which the urgent necessity inherent in labor once was manifest to all. They do not change the necessity itself; they only serve to hide it from our senses. Something similar is true of labor’s products, which do not become more durable through abundance.” Arendt explains that in this laboring society life becomes meaningless. When there is only laboring and consuming, there can be no true public realm. According to Arendt, Marx defines the human being as animal laborans and the goal of communism (the revolution) as freeing the human being from labour, thereby aiming to lay bare a contradiction in his writings. However, it could be argued that Arendt’s interpretation of Marx is debatable. As highlighted by Bradley, tools are an important part of Marx’s conception of work, and work is not just about self-preservation but also has some sort of interface function: it is the connection between humans and nature. Marx’s worries about becoming estranged from the product and the act of production under capitalism also seem quite similar to Arendt’s worries about consumerism. However, Arendt’s distinction between labour (which does not allow for freedom) and work (which does require freedom about ends and goals) is ultimately important to connect “thinking” to freedom; not the animal laborans but the political animal is her starting point. Arendt acknowledges that work is a fundamental aspect of what it means to be human, but for her it is not the decisive criterion that defines us.

IV A crisis of self-understanding?

Both Marx and Arendt articulate concerns about our connectedness with technologies.
The way we use technologies today – how closely connected we are with them, their soft impacts, and the intensity of and dependency on technologies – could hardly have been anticipated and apprehended by Marx and Arendt in their times. In light of our current use of technologies, we might also no longer be convinced by Arendt’s categorisation. The use of Snapchat, for instance, would be a confusing type of technology for her. A lot of things we do today neither relate to the cycle of life nor are meant to stay. Moreover, what we would consider “necessities of life” today differs from those of a few decades ago. The internet, communication technologies and, for instance, data analyses are phenomena that (most) generations living today cannot imagine living without anymore. Yet they are something contingent; losing those technologies would not really mean losing something essential. However, they are closely intertwined with our lives and also inform our self-understanding. In addition, even though our technologies have been developed in a capitalist context, this does not necessarily mean that they are inauthentic. Nevertheless, contemporary thinkers too use the argumentative blueprint of Marx and revisit some old questions and concerns about job loss, deskilling and control. In a recent article John Danaher discusses technological unemployment, the rise of the robot within the political, legal and bureaucratic spheres, and the outsourcing of cognitive work even in the personal sphere, the domain of leisure. The combination of these trends, he argues, could compromise both the ability and the willingness of humans to act in the world as responsible moral agents, and consequently could reduce them to moral patients. Adding to this line of reasoning, outsourcing responsibility might also lead to a situation in which human beings no longer practise moral judgement (in general) and might thereby even lose a capacity.
Moreover, it might not only concern moral judgement, if our intelligent environments can do an overall better job of making judgements. The concern of deskilling is critically examined by Shannon Vallor. She explains that the concept has been used to describe the way automation in the Industrial Revolution threatened the skillsets of machinists, artisans and other highly trained workers. However, the deskilling thesis has also come under considerable pressure, as it is argued that machine automation also freed workers from mindless, repetitive tasks and thereby provided opportunities for reskilling or even upskilling. Nevertheless, the risk and ambiguity of deskilling deserve attention again as new technological developments put other types of jobs at risk of being taken over by “machines.” Vallor emphasises that we have to deal with potential moral (instead of economic) deskilling and its effect on the development of practical wisdom, “as doctors, nurses, pharmacists, teachers, social workers, artists, lawyers, and librarians have seen core aspects of their skillsets made redundant by ICTs.” Is it reasonable, then, to think that human nature changes due to these new technologies? What would it mean if, in legal or medical practice, the capacity for making judgements indeed ceased to be important because algorithms can do a better job? Or if even politics became more technocratic – no longer explicitly about values, with values involved without people being able to make their own judgements? Even though we have always been technologically mediated beings, it might be the case that not all technologies have the same effect, and therefore some deserve more attention in the story of humankind. Instead of sticking to a normative idea of what it means to be human, like Marx and Arendt, we could also consider that what it means to be human is exactly to be able to ask this question. It is then not a descriptive question, but first of all a matter of practical reason, of judgement.
Plessner explains that humans are ex-centric beings. Humans not only live and experience their lives, but also experience the experience of their lives. A human, thereby, is a body, is in that body having an inner experience, and is also outside that body as a spectator of one’s own life. This ex-centric positionality allows us to question our identity. However, it is imaginable that smart environments obstruct this, or at least interfere with our self-understanding. As a social animal, human beings have the need to exchange opinions, experiences and stories. Anthropological views on human nature like the Narrativity Approach can account (e.g. with some sort of historical determinism argument) for the digital revolution. However, there is reason to fear (or be pessimistic about) how we use the technologies, and phenomena like the filter bubble. If all kinds of processes and arguments are being black-boxed and humans are just fed with the outcomes and statements, we lose control over what shapes us. However, I agree with Vallor when she makes explicit that we should avoid blaming market forces (i.e. capitalism) and rather focus on the cultural forces driving our technological societies. From her virtue-ethical approach, she advocates a role for philosophy, art, literature and politics in countering the idea that cultivating or preserving the skills of moral judgement is outmoded. Even though the normative ideas of Marx and Arendt about what it means to be human, and their worries about our connectedness with technologies, are of great relevance in the debate today, it has become clear that their focus on work might be too narrow. Work (both in the sense of Arendt’s “labour” and of Arendt’s “work”) is not the only way to feel connected with technology.