New Book Review: "The Threshold"
The Threshold: Leading in the Age of AI, by Nick Chatrath, Diversion Books, 2023.
Copy provided by Amazon.
In the introductory pages of this book, Chatrath states that "this book functions as a first foray into exploring what good leadership looks like in the present and coming ages of AI. John McCarthy defined AI as the science and engineering of making intelligent machines. I hope to expand your concept of what it is to be a human leader in a time that challenges our former ways of being with technology. Where technologists start with the question 'What can we automate?' and ethicists start with the question 'Should we automate this?', I start with the leadership question 'How can leaders promote flourishing as technology advances?' This book therefore equips you to forge a new synthesis between machines and humanity in the face of future technology-related disruptions. This new synthesis, which I have termed 'threshold leadership', is the subject of this book."
The author later states that "Things that ideally should not happen unfortunately happen all the time in organizations. Is there an individual who thinks, 'Oh my, I should not have done this'? Sadly, I have found this ownership of error to be rare. Software product developers are often so removed from the consequences of their work that many don't even think of themselves as leaders who have a disproportionate impact on the world. As one senior leader in a large tech company recently told me, 'For all I know, I might have been in a situation where I contributed to an AI algorithm written wrongly.' Looking down, she added quietly, 'Do I even know?' This is not about throwing software developers under the bus. The reality is that software developers don't have any say in, or any perspective about, what they are building. They often work on a very small slice of a bigger vision."
Chatrath mentions that threshold leadership is actually a fifth metaphor, following the four discussed in Frederic Laloux's 2014 book "Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness": "wolf pack", "army", "machine", and "family". The wolf pack leadership approach is typified by executives who disregard others, disconnect their thinking from their full humanity, and prioritize short-term personal success and wealth. In contrast, leaders at the army stage often slump into superficial, derivative thinking, clinging to norms too tightly instead of bringing their whole selves to their thinking. The behavioral therapy that provided support for the army leadership approach also paved the way for the machine stage, in which leaders usually live in the future, hardly ever making it back to the present, and pursue growth for the sake of growth; interestingly, Laloux likened this pursuit, in medical terms, to a "cancer". The family leadership approach, however, views formal structure as less important than values and inspirational purpose, with leaders comfortably drawing selectively on the benefits of earlier stages.
The author comments that "in the recent past fewer than 10 percent of leaders have progressed beyond machinelike approaches to leadership", and that "each step leaders take – from wolf pack to army to machine to family to threshold – moves them further from merely cognitively led leadership models and onward to embodied thinking, thinking-as-feeling, and thinking with all you have got." According to the author, "threshold leaders will contribute most and will be most satisfied in an era where distinctions between humans and machines disappear", and "the new story is of connecting thinking and being via four pathways: cultivating stillness, thinking independently, embodying intelligence, and maturing consciousness". Each of these four pathways gets its own section of the book, with three chapters apiece.
Based on the dog-ears I created over the course of reading this book, I found the first two sections, on cultivating stillness and thinking independently, more interesting than the latter two, with the first chapter (entitled "Coding Error") in the second section (entitled "Thinking Independently") the most dog-eared of all the chapters. Chatrath summarizes this chapter by stating that it "investigates why human independent thinking matters as AI improves", considering "three interruptive thinking disrupters, before starting to explore what independent thinking looks like practically." It makes sense that this particular chapter would stand out for me, given a software engineering career spent largely as a consultant for a wide variety of firms, from startups to multi-billion-dollar corporations.
The three interruptive thinking disrupters that the author covers were taken from a group of fourteen identified by Nancy Kline and her colleagues. While Chatrath doesn't mention where readers can find the complete list, these interruptive thinking disrupters appear to be covered in Kline's book "The Promise That Changes Everything: I Won't Interrupt You." In his discussion on persuasion, the author mentions in closing that threshold as profound truth means at least three things: (1) "being at ease with, even seeking out, the sometimes contradictory contribution of different disciplines, (2) inviting others into the conversation, regardless of how they currently relate to belief or faith, and (3) nurturing openness and provisionality as to what latest AI and other scientific experiments show." And in his discussion of this third aspect of threshold as profound truth, he states that "this provisionality is the heartbeat of science, as later discoveries frequently disprove or reshape earlier 'certainties.'"
A mantra many of us heard earlier this decade is to "just follow the science", but "just follow the scientists" is probably more accurate, because research, and opinions based on research, vary over time; it can also be hard to determine whether the research was actually performed and what it looks like on the ground. Quite some time ago, I recall an article in The Economist arguing that computer science really lies at the base of all the other sciences. I realize this was likely a controversial claim, both because most sciences predate computer science and because many would argue that computer science is not truly a science. Still, as a software engineer I know that the application or data engineering code I write processes data, that its inputs and outputs follow rules, and that tests can be written to determine whether those inputs and outputs meet expectations. With AI (including all its subsets, such as ML), there are no hard rules like those in application or data engineering code, because models take the place of rules.
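To make that distinction concrete, here is a minimal Python sketch of my own (not from the book; the shipping-cost rule, the training points, and the tolerance are all invented for illustration). A rule-based function can be tested against exact expectations, while a learned model can usually only be tested against tolerances, because its "rules" live implicitly in its fitted parameters.

    from sklearn.linear_model import LinearRegression

    def shipping_cost(weight_kg: float) -> float:
        """Rule-based: the output follows explicitly stated rules."""
        if weight_kg <= 1.0:
            return 5.00
        return 5.00 + (weight_kg - 1.0) * 2.50

    # A hard rule can be verified with exact tests.
    assert shipping_cost(0.5) == 5.00
    assert shipping_cost(3.0) == 10.00

    # Model-based: behavior is learned from example data rather than written
    # down as rules, so tests typically check tolerances, not exact values.
    X = [[0.5], [1.0], [2.0], [3.0], [4.0]]
    y = [shipping_cost(w[0]) for w in X]   # training data generated from the rule
    model = LinearRegression().fit(X, y)

    prediction = model.predict([[3.0]])[0]
    assert abs(prediction - 10.00) < 1.0   # a tolerance, not an exact expectation

The point of the sketch is not that the model is wrong; it is that correctness itself becomes statistical rather than rule-bound once a model replaces the rules.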
While I've never watched the Black Mirror series because I don't watch much television, Chatrath mentions that the "Black Museum" episode explored the assumption that a person's value is no more than what is reducible to equations. "This assumption is similar to the assumption that 'you are just your brain' – or, similarly, 'your value lies 'only' in your brain.' In the context of advancing technology, these assumptions arise frequently and persuasively. The assumptions matter since, in the words of essayist and novelist Marilynne Robinson, 'whoever controls the definition of the mind controls the definition of humankind itself.'" The author later comments that "the assumption that you are just your brain often rests on a philosophical view known as materialism, which is the view that the observable physical world is all that exists. The word 'observable' is key. Materialism can also include the belief that science is the only way to truth, a belief known as scientism. A significant number of scientists and philosophers think that scientism omits vital parts of reality."
After covering the unavoidable fact that "all scientists, whether neuroscientists or computer scientists, have belief or faith", and that "it is important to consider what is persuasion and what is established fact", Chatrath notes in his closing statements on persuasion that professor Frank Wilczek, recipient of the 2004 Nobel Prize in Physics, accepts that no current AI research brings AI close to two "big advantages" that humans have over AI (connectivity and interactive development), but unfortunately takes a leap of faith in the following sequence: (1) "human mind emerges from matter", (2) "matter is what physics says it is", (3) "therefore, the human mind emerges from physical processes we understand and can reproduce artificially", and (4) "therefore, natural intelligence is a special case of artificial intelligence." While the author explains that the first two statements are "beliefs or worldview statements, not testable scientific hypotheses", he doesn't explicitly address the last two. Perhaps that is because the first two are foundational to the other two, but it's important that people understand why the human transporters featured in Star Trek will never be built.
The author repeatedly argues for the "practicality" of what he seeks to lay out in this book, but unfortunately, I don't think he ever achieves this goal. The "threshold resources" sections he provides at the end of every chapter were written with good intentions, but I don't find them very inspirational, mainly because they are written very dryly: largely a series of bullet points, with no diagrams. I received this book from Amazon a week or so prior to its publication date, and while ChatGPT had already become a phenomenon by late 2022, the content of these sections feels at first glance as if it were generated by ChatGPT. But the assembly of words Chatrath has written is novel, not to be replicated by current-day AI. As a takeaway for the author, perhaps he could expand these end-of-chapter sections into a companion guide to this book.