(Image) 9/13/1965 – Qui Nhon, South Vietnam – Guitar slung over his shoulder, a trooper of the United States 1st Cavalry walks ashore from a landing craft. (Kyoichi Sawada – Bettmann/CORBIS)
Knowing what part of a system may offer opportunities for action – and which parts don’t – is critical.
Many wish to pretend that the world isn’t complex; that they understand it; and that they can change it meaningfully to match their existing narratives. That’s dangerous hubris. It’s a way of avoiding reality while pretending that you’re the only moral one in the room, and everyone else is too stupid to be worthy of consideration.
See also 301. THE FIRST LAW OF DESIGNING THE FUTURE: ‘YOU BREAK IT, YOU BUY IT’; 901. BE A LEARNING ORGANIZATION; 408. PREPARE YOURSELF FOR SURPRISE; 302. BEFORE YOU LET IT OUT IN THE WILD, TEST AND RETEST HOW THE STREET WILL USE IT; 303. BE CONSCIOUS OF HOW YOUR WORK COULD BE USED FOR EVIL; 203. BE ITERATIVE; 403. ESTABLISH CONTEXT; 404. ESTABLISH YOUR CONSTRAINTS – E.G. YOUR RESOURCES, OR THE LAWS OF PHYSICS; 405. ESTABLISH HOW YOU MIGHT OBLITERATE YOUR CONSTRAINTS.
There is much to be done in designing new futures: creating agile and adaptive institutions, transforming minds, focusing more on accurate perception and less on existing ideology. But “Just pick a future, then backcast, then voila!” Nah.
Put another way: during Nam, if you had a low draft number, you could go to Canada, become a conscientious objector, wait to be drafted, or tweak your future somewhat by enlisting and aiming for a specialty that would be least likely to get you killed. That was individual agency. But when it came to the war itself – which chewed up and spit out many, some in body bags – there were deep tides of history and stochastic chance that posed a far greater challenge.
Make sure you know which is which.
You think you’re gonna stop AI? Fine. You go right ahead . . . milk and cookies is after naps. It isn’t until you perceive the inevitability of the AI race that you get to ask useful questions such as: what kinds of implications might such an evolution involve? Give me some scenarios? Help me think about it? Help me create meaningful soft law? Whose ethical structure will be embedded in the winning AI? (Hint: Vegas is 4 to 1 Confucian.)
Thus, for example, when the AI folks at Google in 2018 didn’t wish to help the U.S. government, they significantly increased the chances that Chinese AI would inch ahead, then time cycle into quick dominance. That unintended result is precisely what happens when you don’t appreciate the system you’re in, and hubris leads you to overestimate your real influence – and thus fail to understand where agency may really exist.
Agency is generally bounded. The most important thing about agency is understanding the environment well enough to know where those boundaries are. And thinking hard to estimate reasonably at least the first-order effects of one’s choices.
Agency comes from the Western philosophical dialog around free will. Free will, in turn, arose not as an observation of people in the real world, but as a religious principle required by Christian theology. Accordingly, the absolute existence or non-existence of agency is an interesting theological topic, but of little use in the real world. In the real world, agency, once exercised, is an effort to push complex systems in certain directions. Whether agency was appropriate in a situation thus depends on the evolution of the system involved, and continuing dialog with the system becomes a requirement for ethical exercises of agency. Has the chosen action had the desired result? If not, act again to try to guide the system appropriately.
In complex adaptive systems, the stronger the intervention, the more unpredictable and perturbed the system may become. A respect for complexity means that minimal, iterative, repeated exercises of agency are preferable to attempts at large interventions. And they must always be part of a learning system.
If you’re not learning, you may be exercising agency, but it is arbitrary, capricious, and unethical. So you should stop.
This is not to say that designing the future is futile. To be sure, there are scenarios alleging humans will never again have any control over their destiny. In this view, the species simply can’t move fast enough to control anything. Technology is in the driver’s seat and we are merely along for the ride – possibly not for long. Steering is an illusion.
It’s a credible scenario. Only time will tell. But scenarios are not rock-hard predictions. And for better or worse, The Guide Project is a colossal bet in the opposite direction. After all, if a species has developed an undeniable pattern of making its own luck and defying chance for tens of thousands of years – so far purposefully coming up with ways to dodge astounding existential threats like the development of nuclear weapons – it’s important to examine exactly how we’ve prevailed. The stakes are too high not to give success a shot.
Be humble. You control a lot less than you think. With agency, as with medicine: first, do no harm. Yet – never give up. Realistic optimists are those who understand the game but are still in it.