Hortus Semioticus 11 / 2023 – Interview 2
AN INTERVIEW WITH RAINE REVERE
Interviewed by
Eleni Alexandri
During the 15th World Congress of Semiotics, which took place in the summer of 2022 in Thessaloniki, I had the pleasure of meeting Raine Revere over a lunch break. It did not take me long to see that she is a driven and ambitious individual with a profound respect for the field of semiotics. In one of our discussions, she shared her vision of integrating her knowledge of programming and semiotics in order to design and launch a mobile application that, beyond its practicality and usefulness, would stimulate the brain and offer the user a cognitive experience. Since then, I have wanted the chance to talk with Raine about her entrepreneurial spirit, her commitment to lifelong education, and the journey through which she became interested in semiotics.
With a bachelor’s degree in computer science, a master’s degree in contemplative psychotherapy and Buddhist psychology, and great familiarity with semiotics, Raine brings together an unusual combination of fields, one that promises an intriguing dialogue about technology and the future in relation to our field of study.
I would like to thank Raine for this interview and wish our readers an enjoyable read.
Interview
Eleni Alexandri: Would you like to start with an introduction of yourself, your educational background, and how you discovered semiotics?
Raine Revere: Yes, thanks! I began my career in the field of computer science before moving into a clinical psychology program. Though they are seemingly disparate fields, I have come to see them as part of the same overarching program of learning: a deep and cross-disciplinary study of mind. Through the context-free language of software, we can understand the “forced” meanings that arise within formal systems. Through the context-laden language of our human-scale lives, we can understand the hermeneutic meanings that arise from intersecting spheres of culture and self. After studying the nature of information and then the nature of self, it was no surprise that the study of meaning, qua semiotics, offered an ideal framework for understanding meaning in a way that accounts for first-, second-, and third-person realities.
I am currently developing concept mapping software that draws on my diverse background and is heavily influenced by my studies in semiotics. [Raine shares more about her current project and her approach to design below.]
EA: What would you say is the most fascinating thing about semiotics? Perhaps that one element or piece of information that immersed you into the field or something that you discovered along the way?
RR: Tough to pick one! I think for me it is the modally agnostic nature of semiotics. That is, the resistance to “fixing” the ontology of meaning onto a single philosophical substrate. I believe it is the spirit of semiotics to recognize the ontologically multivariate and hypercomplex basis of meaning. Within this agnosticism (as opposed to fundamentalism), there is room for subfields to explore meaning within a given context or from a given set of conceptual premises that shape the kinds of answers that emerge.
EA: In a broad and general way, we can say that there is a tight link between semiotics and cybernetics, but also a deep connection between psychology and semiotics. On the other hand, equating a human with a machine that takes in feedback loops and carries out different functions seemingly contradicts the notion of the soul and the depths of the psychological world. What similarities and differences do you find between these three fields of knowledge?
RR: Yes, there is a perplexing array of informational and interpretative forms of knowledge. Reconciling information and knowledge is not so easy. Practically, I see it as an issue of different types and levels of intelligence. What is interesting is the diversity of forms displayed, and it gives us a rich field of phenomena to study. Closed questions like “Is it intelligent?” can be supplanted by open questions of “What kind of intelligence?”, “How does it function?”, “What level of complexity?”, “Who is it intelligible to?”, and “What spheres of being are entailed?”.
In the field of AI, discussions on general intelligence have long had the tendency to ask binary ontological questions, as if waiting for the ghost to suddenly pop out of the machine. I think reductive questions that take the form of philosophical koans can stimulate interest, but cannot be expected to provide real answers about the nature of the subject. They only push us to go further. Astute observation and rigorous intellectual debate help reveal truth in forms commensurate with our current understanding of reality.
Going back to your question about different ways of looking at knowledge, I consider the concept of context to be a helpful through-line. Machines as such demonstrate isomorphisms through tight couplings in a context-free environment, and thus precipitate “information”. Increasing levels of context-dependence (deixis) reflect processes of knowledge that are more distributed across time and space, and thus depend on entities that dwell in a relevant context. Such indwelling is made possible by deep agent-environment coupling over time. Our diverse approaches to studying knowledge reflect the diverse types and levels of contextualization and embeddedness of cybernetic systems in their milieu.
EA: Would you say that your acquired knowledge of concepts, theories, and semiotic models helped you identify and discover, retrospectively, new layers or aspects of your previous studies?
RR: Yes, absolutely. Many fields of study come with a lot of ontological baggage. Computer science, and the data and information sciences in general, tend to reify information, decoupling it from its etiology. Semiotics was a much-needed corrective for me to re-empower context, environment, and culture as the necessary ground for meaning. With today’s expansion of the data sciences and the foregrounding of AI, I am afraid that the situation is not getting better. Data-centric approaches are expanding into every field, and bringing ontological assumptions with them. We have been in a data-driven fever dream since the early 2000s. To be clear, informationism is not a necessary corollary of these studies, but it is a legacy inherited from earlier generations of thinking in the computing fields. There is a fairly direct lineage from cybernetics to behaviorism to 1970s-era artificial intelligence. Only now are fields like cognitive science moving beyond the “brain-as-computer” model of intelligence and meaning-making.
EA: How easy or tough is it to apply semiotics in combination with programming? I know that you are currently working on the creation of a mobile application, and your vision is indeed to integrate these two spheres in order to provide an efficient and practical but also stimulating cognitive experience. What can you share at this stage about your work?
RR: Since 2018, I have been designing and developing a piece of software that is intended to empower a user’s personal sensemaking process. That is, it provides an interactive medium to organize one’s thoughts, develop ideas, and refine conceptual structures. I envision the entire enterprise as a kind of applied semiotics project. Can digital technology increase the perspicuity of a user’s semiosphere? The aspiration is to help people become experts in their own sensemaking. When the semiotic relationships in which one is embedded suddenly become reflexive, it opens up so many possibilities for engaging with life with greater agency. I think this is something that will appeal to an increasingly meta-aware society.
In designing software that is in alignment with semiotics, clinical psychology, and 4E cognitive science, I have had to rethink many of the common design paradigms of Silicon Valley tech. So-called human-centered design gained popularity in the 2010s, and has now branched off into various models that incorporate multicultural awareness, community accountability, and other reforms. Yet, I find these models lack vision. In their earnestness to increase engagement and decrease friction, they frequently resort to least-common-denominator design that demands nothing of the user and maintains the epistemic status quo. While I do ask how software can be as easy to use and intuitive as possible, I also ask: How can software help the user become more aware of their thought processes? How can it give them opportunities to exercise greater semiotic agency? What learning curves can I integrate into software that facilitate the development of cognitive skills? I think this commitment to the enrichment of the person is an important (and notably, non-consumerist) approach to software design.
Let me give a concrete example. Previously, I was building a habit tracking app. I wanted users to define the semantic landscape themselves, so I avoided labels or colors with pre-assigned meanings. They chose their own habits, chose the emoji to represent them, and even chose the color gradient that represented their progress. I did this by proffering iconic signifiers and intentionally withholding symbolic signifiers. Through the design, I allowed users to conceptualize habits and habit tracking in whatever way made sense to them. They were literally the ones making sense, establishing the norms and meanings of their habit formation experience. I gave them more freedom, but also more responsibility. In this way, I hoped to facilitate reflexivity with the user’s sign world.
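[As a minimal illustrative sketch, not the app’s actual code, the design Raine describes might be modeled roughly as follows in TypeScript, with every signifier supplied by the user rather than by the software.]

```typescript
// A user-defined habit: the app supplies no default names, colors, or categories.
interface Habit {
  id: number;
  name: string;                 // label chosen by the user
  emoji: string;                // iconic signifier chosen by the user, e.g. "🌱"
  gradient: [string, string];   // user-chosen colors marking low-to-high progress
  completions: string[];        // ISO dates on which the habit was completed
}

let nextId = 0;

// The app only stores and renders what the user defines; it never injects
// pre-assigned symbolic meanings such as category labels or default colors.
function createHabit(name: string, emoji: string, gradient: [string, string]): Habit {
  return { id: nextId++, name, emoji, gradient, completions: [] };
}

const meditation = createHabit("Morning sit", "🧘", ["#e0f7fa", "#006064"]);
meditation.completions.push("2023-06-01");
```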
In my current project, I create a similar void for the user to fill with meaning, except this time it is a trans-hierarchical knowledge graph that they create and evolve. It is like a canvas on which they can paint their thoughts in words and narrative fragments, and then observe the semiotic relationships in visual form. My personal experience suggests that using the software itself increases one’s ability to work with language and meaning in more abstract and complex ways. The software has been designed as a mobile app to better integrate into everyday life: an accessible sensemaking companion that is always available to capture, integrate, and extend personal insights.
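[Again as an illustrative sketch rather than the project’s actual schema, a trans-hierarchical knowledge graph can be modeled as thoughts linked under any number of contexts.]

```typescript
// Thoughts are free-form fragments of text; links place a thought under a
// context. Because a thought may appear under any number of contexts, the
// structure is a graph rather than a strict tree ("trans-hierarchical").
interface Thought {
  id: number;
  text: string;   // a word, phrase, or narrative fragment
}

class ThoughtGraph {
  private nextId = 0;
  private thoughts = new Map<number, Thought>();
  private links: { parent: number; child: number }[] = [];

  add(text: string): Thought {
    const t = { id: this.nextId++, text };
    this.thoughts.set(t.id, t);
    return t;
  }

  // Unlike a tree, the same thought may be linked under multiple parents.
  link(parent: Thought, child: Thought): void {
    this.links.push({ parent: parent.id, child: child.id });
  }

  // All contexts in which a given thought appears.
  contextsOf(child: Thought): Thought[] {
    return this.links
      .filter(l => l.child === child.id)
      .map(l => this.thoughts.get(l.parent)!);
  }
}

// Usage: the same fragment can live under "Work" and "Wellbeing" at once.
const g = new ThoughtGraph();
const work = g.add("Work");
const wellbeing = g.add("Wellbeing");
const pacing = g.add("Pace myself during deep focus sessions");
g.link(work, pacing);
g.link(wellbeing, pacing);
```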
The software is currently in alpha stage, but a public beta will be coming soon.
EA: How do you imagine the next few years professionally and academically? Do you see yourself returning to academia and pursuing a degree in semiotics?
RR: This question has been torturing me since I finished graduate school! I love the academic community, and truly long to return to that intellectually satisfying environment. In the meantime, I am deeply committed to completing and releasing this software. While it is painful to delay my academic goals, I believe that the work I do now will pay off in terms of the new ideas and concepts it will spawn. I have a lot to write about. I see my work now as part of my larger calling to teach and expand human knowledge. Pursuing a degree in semiotics or a related field is definitely in the cards for me.
EA: Speaking about the future, it could be said that we are experiencing drastic changes and advancements in technology. The last few months have been dominated by the rise of artificial intelligence and its various models with different functionalities, as well as a drastic shift towards Web3 (although its moment of ‘explosion’ could be traced to 2014, when Gavin Wood coined the term). What commentary do you have on this new technological era, and how would you premediate the near future? What do you think we should expect in the following years?
RR: Both artificial intelligence and Web3 are truly groundbreaking technologies. This might seem obvious, but there are still people who speak about them as if they were passing trends. I can say for certain that they are here to stay, even when the hype inevitably dies down. That means we have to learn to live with them. Artificial intelligence will completely reshape the realm of human productive output. Yes, it will take jobs. We should embrace it, if only because it is inevitable. We are all specialists now, because AI can do the generalist activities better. We are pushed to the fringe. Yet this allows us to put all our creative power into more nuanced activities, and leave a lot of the rote execution up to machines. However, we are still the ultimate generalists, because we are feeling subjects embedded in the lifeworld. Only we can engage philosophical and ethical questions. There is little use in being conservative towards technology (you will just end up on the wrong side of history), but there is great use in working to create a future in which technology supports a healthy ecosystem from the biosphere to the noosphere.
When the internet was first developed, the potential was obvious, but the scope of its true impact was unimaginable. People thought that being able to order pizza without leaving one’s home was a good example of the internet’s potential. Today we have a similar situation with AI. Its power and potential exceed our creativity. An AI module in a word processor or search engine that gives us prompts for further ideas is only the most superficial application of the technology. Real usage will be more deeply integrated into society, shifting entire work streams. Think of AI as automation at a level of complexity never before achieved. Everything is now conceptual art; the medium and execution are secondary to the idea. The concept and the curation of implementations are where the work lies. Story and narrative become even more important, because they are the only things tethering technology to the human experience. With an infinite number of creative manifestations, the narrative that resonates is the one that rises above the noise.
EA: Do you think that semiotics could provide effective and efficient tools for the improvement of these technologies? For instance, could semiotics offer a solution to the problem of AI models in interpreting nuanced concepts and ambiguous words charged with subjective human values? What about the role of semiotics in the improvement of the Semantic Web?
RR: That’s a good question, although I may not be the best person to answer it. I am more interested in how semiotics can help us do things that AI cannot do. The more AI becomes capable of, the greater that need perhaps becomes. AI will get better and better at emulating human behavior and pattern matching human intention and creativity, though I would note that this does not constitute “understanding” or “interpretation” on the part of the machine. This new age requires an increased reverence for the intimacy and spontaneity of intelligence. Machine emulation and human intelligence will become harder and harder to distinguish, yet that small difference will be more profound in its inimitability.
I think with the advent of powerful LLMs, the traditional concept of the Semantic Web has become obsolete. We no longer need special semantic structures to help machines interoperate with human meaning. They can pattern match at the level of language itself, which turns the entire Web, semantic or not, into an API (application programming interface).
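[As a hedged illustration of this point, and not a reference to any particular model or library, the idea of treating the Web itself as an API might be sketched as follows; completeWithLLM is a hypothetical stand-in for whatever model client is used.]

```typescript
// `completeWithLLM` is a hypothetical stand-in for a model client;
// it is not a real library call.
declare function completeWithLLM(prompt: string): Promise<string>;

// Instead of relying on RDF or other semantic markup, ask the model to read
// ordinary page text and return structured data, treating the page as an API.
async function extractEvents(url: string): Promise<{ title: string; date: string }[]> {
  const pageText = await (await fetch(url)).text();
  const prompt =
    'List each event mentioned in the following page as a JSON array of ' +
    'objects with "title" and "date" fields. Return only the JSON.\n\n' +
    pageText;
  return JSON.parse(await completeWithLLM(prompt));
}
```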
EA: Do you think that the machines might hold or potentially provide, in the future, some more clues that may aid us in our journey towards discovering ‘meaning’? On the one hand, machines hold a much-desired complete lack of subjectivity that humans can never achieve; on the other hand, they are still human-made programs whose design and training might entail some human biases.
RR: As automated signifier manipulation machines, digital technology highlights the informational aspect of sign play. As technology evolves and gains new territory, it shows us in ever-increasing granularity how signifiers can be manipulated to productive ends. “Meaning” becomes more differentiated. Plus, new technology generates new affordances, which enable new forms of meaning.
It is interesting that you refer to a lack of subjectivity as much desired! I might suggest a different way of looking at it. Objectivity, in the moral sense, is actually a heightened subjectivity that is aware of injustice. It has little to do with an objectivism that lacks subjectivity and thus lacks the ability to respect the lifeworld in contextually appropriate ways. In other words, bias is only bias when compared to a standard that is judged to be fair in culturally-specific ways. In Michael Polanyi’s language, we are the adjudicators of our own hypotheses of truth. Our flaws are in our subjectivity, yet so is our hope.
EA: Thank you for the interesting discussion. Is there anything you would like to add or share with our readers?
RR: Thanks to everyone who took the time to read this. I hope it stimulates your sign world, and I sincerely hope to connect with you at the next conference or social event! If you are interested in being included in the beta release of my software project, drop me a line at raine@cybersemics.org.