Bridging Rigid Logics with Blurry Imaginations

The tension between ‘what is’ and ‘what can be’ is omnipresent in technological design. The ‘what can be’ side of the tension further striates into ‘what ought to be’ and ‘what can afford to be’ in an industrial economic setting. To me, nowhere is the tension between ‘what is’ and ‘what can be’ more apparent than with digital computers. These devices are substrates for logical operations, and as increasingly diverse communities of people have integrated them into their practices we have seen a flowering of implementations in software. Yet the initial boundary conditions of the history of computing powerfully shape what it is – where computing has been is the ground on which we stand as we stretch to search for what it can be.

“The devices and systems of technology are not natural phenomena but the products of human design, that is, they are the result of matching available means to desired ends at acceptable cost. The available means ultimately do rest on natural laws, which define the possibilities and limits of the technology. But desired ends and acceptable costs are matters of society.” (Mahoney, 122)

So far ‘desired ends’ of the computational society have been seeded with industrial concerns and perspectives.

“the computer industry was, more than anything else, a continuation of the pre-1945 office equipment industry and in particular of the punched card machine industry.” (Mahoney, 126; quoting Haigh) “But making it universal, or general purpose, also made it indeterminate. Capable of calculating any logical function, it could become anything but was in itself nothing (well, as designed, it could always do arithmetic).” (Mahoney, 123)

Thus, in the 1970s, humanist artists began wading into computation, and we have witnessed an explosion of ‘high level’ creativity as to what the metamedium of ‘computation’ can actually do for us as meaning-makers. Ideas flourished that saw the computer as not just a machine for counting, but a substrate for human imagination. Yet the histories of computing set the devices we compute with on a path that has shaped their form: devices with baked-in logics that we recombine. The histories of computing feature engineering, science and data analysis as the kernel of the computer’s unfolding into the wider sociotechnical ecosystem. Art was tacked on later as an affordance of having enough 1/0s to spare. Computer programs are precise manipulations of the state of an electro-atomic system we call a computer. Yet human language too manipulates other electro-atomic systems (that is, other humans) in a much more blurry and imprecise way. This blurriness leaves room for emergence, and that, I think, is the key to the future direction of computing itself.

I am struck more and more each day by the 20th century origins of computing, and each day I harden my resolve to lean into what the 21st century of computing might look like. The future will see the “front” and “back” of computation merge into a holistic loop where generative logics allow computers to learn as they are used. The loops in our minds will be further augmented by loops through machines that begin not just to manipulate saved libraries, but increasingly to generate new forms. We are, I think, at a profound crossroads: will computing be continually defined by linear “processing”, or can we move it toward continuous relational inference? I think we must move to the latter, for the affordances of the future will enable and demand new human-scale ways to program computers. We are in the midst of a latent programming revolution.

This thinking has been crystallizing for me through the input of this class and my continued experience with the Microsoft Surface. The Surface device that I am typing this on is perhaps the perfect symbol for the crossroads at which personal computing currently stands. The Surface has two distinct interface modes: the touchscreen/pen digitizer, and the keyboard. The mouse is unified with the digitizer pen decently well, but the keyboard remains a realm unto itself.

I find it increasingly jarring to move between free-flowing writing inside digital inking applications and the rigid interface of programming.

To this day, when writing to a computer at the level of its logical comprehension, we are forced to bring our hands together and cramp over an unchanging keyboard. We input binary commands into the machine through keys that correspond to symbols, which in sequence (when interpreted) cause the electrical state of the computer to evolve step by step, as fast as the system clock allows.

The more I use a pen on a grid, the more I believe that there is potentially another way to program.

The work of von Neumann and others who pioneered the study of cellular automata has shown me that computing does not have to be about direct control using predefined symbol sets, but rather can be about boundary conditions and evolution.
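To make the contrast concrete, here is a minimal sketch of programming by rule and boundary condition rather than by instruction sequence: an elementary one-dimensional cellular automaton. The rule number, grid width, and helper names are my own illustrative choices, not drawn from any particular system. The “program” is just a local update rule plus an initial condition; the structure that appears is emergent.

```python
# An elementary cellular automaton: each cell's next state depends only
# on its 3-cell neighborhood, looked up in an 8-bit rule table.

def step(cells, rule=110):
    """Apply one update of a 1-D cellular automaton with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the neighborhood (left, self, right) as a 3-bit number.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # The rule number's bits are the lookup table for all 8 neighborhoods.
        out.append((rule >> neighborhood) & 1)
    return out

def run(width=64, steps=20, rule=110):
    """Evolve from a boundary condition of a single live cell."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    # Print the evolution as rows of '#' and '.' to watch structure emerge.
    for row in run(width=64, steps=20):
        print("".join("#" if c else "." for c in row))
```

Nothing in this program says “draw a triangle of nested patterns,” yet running it produces exactly that: the sketch of what it might mean to program by setting conditions and letting evolution do the work.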

I wonder if we cannot use digitizer grids and pens to allow human operators to sketch with computers. Already much of the power of the computer comes to us via added abstraction. To edit a photo with machine code directly would be impossibly tedious, but thanks to many layers of abstraction I can use a tool like Photoshop to move around thousands of pixels and billions of transistors in large strokes.

Programming languages have been path dependent upon 20th century paradigms. To me, programming a digital computer feels like playing with a near-infinite movable type: there are libraries of modules that I arrange in patterns to produce sequences which instruct the machine and can even mean something to a person.

Yet I wonder, is that the only way to program computers? Must we only use rigid pre-delineated symbols?

I think we can begin to write higher level programming environments that allow us to write to our computers, not type, but actually write.

I discovered a groundbreaking paper recently which shows that a unification between the way humans reason and the way computers process might be increasingly possible and fruitful.

Researchers Lake, Salakhutdinov and Tenenbaum instantiated a “machine learning” concept by creating a “Bayesian program learning (BPL) framework, capable of learning a large class of visual concepts from just a single example and generalizing in ways that are mostly indistinguishable from people.” Using digital inking, they developed a technique to parse drawn symbols via vector and temporal relational information and to allow the computer to generate further symbols from these inputs.

“Concepts are represented as simple probabilistic programs—that is, probabilistic generative models expressed as structured procedures in an abstract description language.” Their framework brings together compositionality, causality and learning to learn. “As programs, rich concepts can be built ‘compositionally’ from simpler primitives. Their probabilistic semantics handle noise and support creative generalizations in a procedural form that (unlike other probabilistic models) naturally captures the abstract “causal” structure of the real-world processes that produce examples of a category.”

“Learning proceeds by constructing programs that best explain the observations under a Bayesian criterion, and the model “learns to learn” (23, 24) by developing hierarchical priors that allow previous experience with related concepts to ease learning of new concepts (25, 26). These priors represent a learned inductive bias (27) that abstracts the key regularities and dimensions of variation holding across both types of concepts and across instances (or tokens) of a concept in a given domain.”

“In short, BPL can construct new programs by reusing the pieces of existing ones, capturing the causal and compositional properties of real-world generative processes operating on multiple scales.”

Finding this paper feels profound to me. Lake et al have created a learning system that does not need huge amounts of data, but instead uses smaller stochastic programs to represent concepts, building them compositionally from parts, subparts and spatial/temporal relations.
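As a toy illustration only (my own construction, not the authors’ code), the compositional idea can be sketched like this: a concept “type” is a small stochastic program assembled from primitive parts and relations, and each “token” of the concept is a noisy re-run of that program. The primitive names, relation names, and jitter parameter below are all invented for the example.

```python
import random

# Hypothetical stroke subparts and attachment relations, standing in for
# the pen-stroke primitives BPL learns from inking data.
PRIMITIVES = ["line", "arc", "hook", "loop"]
RELATIONS = ["start", "end", "along"]

def sample_concept(rng, max_parts=3):
    """Sample a concept 'type': a composition of parts joined by relations."""
    n = rng.randint(1, max_parts)
    parts = [rng.choice(PRIMITIVES) for _ in range(n)]
    rels = [rng.choice(RELATIONS) for _ in range(n - 1)]
    return {"parts": parts, "relations": rels}

def sample_token(concept, rng, jitter=0.1):
    """Sample a concept 'token': the same program re-run with motor noise,
    so every exemplar shares structure but differs in detail."""
    return [(part, round(rng.gauss(1.0, jitter), 3)) for part in concept["parts"]]

rng = random.Random(0)
concept = sample_concept(rng)                       # one generative program
tokens = [sample_token(concept, rng) for _ in range(3)]  # three distinct exemplars
```

The point of the sketch is the two-level structure: the concept is itself a program, and every exemplar is a fresh execution of it, which is what makes the representation generative rather than a stored template.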

BPL is a generative model for generative models.

The BPL approach gets us away from the traditional histories of computing with their emphasis on large datasets and toward smaller evolutionary rules-based generative computing.

Using the BPL method, concepts are represented as probabilistic relational programs, so anything entered by the human operator (or, theoretically, by other BPL-taught machines) becomes instantly absorbed into a formal logic and is combinatorial at a mathematically grounded and sound level.

The key to BPL is that, like a human being, it allows the computer to start working on relational categorization after just one example. This is how “machine learning” can go from a tool of the corporation to a tool of the individual. We individuals do not have thousands or millions of datapoints to give to our personal computers, but we do have individual ideas that we can sketch to them.
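To sketch how one example can suffice (again my own toy construction, not BPL’s actual Bayesian machinery), imagine storing a single parsed exemplar per concept and classifying new sketches by how well each stored structure explains them. The scoring function, probabilities, and concept library below are invented placeholders for a proper Bayesian criterion.

```python
import math

def log_score(concept_parts, observed_parts, match_p=0.9, miss_p=0.05):
    """Crude log-probability that a stored concept generated an observation:
    reward matching parts, penalize mismatches and unexplained length."""
    score = 0.0
    for c, o in zip(concept_parts, observed_parts):
        score += math.log(match_p if c == o else miss_p)
    score += abs(len(concept_parts) - len(observed_parts)) * math.log(miss_p)
    return score

# One stored example per concept - no big dataset required.
library = {
    "A": ["line", "line", "arc"],
    "B": ["loop", "loop"],
}

def classify(observed):
    """Pick the concept whose single exemplar best explains the new sketch."""
    return max(library, key=lambda name: log_score(library[name], observed))
```

Even this caricature shows the shape of the idea: classification becomes an inference over which generative structure best explains the input, so a single well-parsed example is already usable.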

I truly think that computer science is going through a revolution in understanding: no longer will computing be about “business machines”, cracking ciphers and massive datasets; instead it will increasingly feature generative creative inference and blurry conversation.

The BPL approach, if embedded into the OS of modern personal computing, could enable humans to converse with designed emergent libraries of recombinatorial mathematical artifacts. BPL is much more “as we may think” than any of the ‘neural net’ approaches that require astronomically large datasets and vast number crunching. Programming can evolve from reading “tapes” with rigid logics into sketching blurry ideas and creating relational inferences. This is not a replacement, but rather a welcome addition. The BPL approach is still “grounded” in piles of 1/0s, but the way that BPL structures them is much more modular and inherently combinatorial than previous approaches (from my limited perspective at least).

I think this approach is a keystone I have been seeking: a way to merge ‘symbols that mean’ with ‘symbols that do’ into a unified mathematically complete “metasymbology” that will allow us to merge programming with language. Going further, the authors (and I) see no limits to using a BPL-style approach to allow computers to engage with all forms of human symbolism, from language to gestures to dance moves, from engineered devices and natural complexity all the way to abstract knowledge such as natural number, natural language semantics and intuitive physical theories. (Lake et al, 1337)

In their history, computers have been substrates for enacting human logic; moving forward, they will also become ever better substrates for enacting human dreams.

--

Sources:

Michael S. Mahoney, "The Histories of Computing(s)." Interdisciplinary Science Reviews 30, no. 2 (June 2005).

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum, "Human-level concept learning through probabilistic program induction." Science 350, no. 6266 (2015). https://www.cs.cmu.edu/~rsalakhu/papers/LakeEtAl2015Science.pdf

 

originally for Georgetown CCT class CCTP-820: Leading by Design – Principles of Technical and Social Systems 

Winds of Imagination, Waves of Change

A philosophical and metaphorical musing on looking to the horizon of the future.

This universe is full of waves. Even particles are waves at their deeper, more meaningful quantum level. So it is with change.

Think about standing in an ocean. There are small waves you can make, ripples that radiate from you affecting limited space, but still impacting the whole system even if just slightly. Then there are the grand waves, the ones that come from out beyond the horizon. These grand waves are fueled by wind. In the world of humans it is much the same, although now metaphorical. Humanity generates a powerful wind of imagination and desire. The waves of change are fueled by our combined imagination, under no one's sole control, yet affecting all.

Now that we humans are networked together directly this wind is growing in strength, and the waves it is making are growing larger than ever before. These cannot be controlled, but they can be noticed. 

The oceans taught me this. If you wish to catch a wave, first you must see it coming. You may not know how high it will be, or exactly where it will break until it is just in front of you; but if you have been watching the wave arrive it will not surprise you. You will be ready, and if you place yourself in front of it just so, you will ride it. 

Yet you must be vigilant. If you do not notice the wave until it is upon you, it will crash over your head. If you rush too far out to meet a wave, it will pass by and break behind you, leaving you stranded far from shore. If you catch a wave too late, only just as it is cresting, you will fall off its crest and smash into the trough below. You may try to duck under the waves as they come, preferring to let them pass you by, but this is a losing battle; your breath will tire eventually and the waves are relentless. Change, like an ocean wave, can be immensely, destructively powerful. But if you pay attention, you can ride the change.

This is how I look at the horizon of the future. I do not look to the future to predict specifics, I look to identify the bumps on the surface of possibility, the tell-tale signs that a wave of change is available to happen. 

[image: WAVE-horizonJH.jpg – waves on the horizon]

This is the work of foresight, of analyzing trends: to see the wave of change before it breaks, to follow its latent potential all the way until it finally rears up and reshapes the world, and to share that knowledge with others so that we may all ride together. Change comes inexorably; it is a constant of the universe. Waves pulverize shells to sand and mold lumpen stones into spheres. Life constantly evolves. Stars are born, live, and die. The galaxies spin and collide and coalesce. All matter and energy is seeking all that is allowed in the universe, spinning out in an elegant dance of radiant permutation. Whatever is possible has a chance of happening.

The system of humans on earth is not some amusement park wave generator, we cannot simply “turn off” the waves of change we make. The change we bring is fueled by that endless growing wind of imagination and desire, of questions asked and answers sought. We are a part of the system that is the universe and so are enveloped in its constant of change. We are the universe expressing possibilities, we are a mechanism by which it changes itself, changes ourselves. We are a seething, roaring ocean.

In this moment, there are monumental waves on civilization’s horizon, ones grander than we have ever seen before. Waves of change fueled by winds of recombinant networked imagination, the strongest winds society has ever generated. The crests will keep getting higher, and to ride the waves will become ever more perilous. From fire to nuclear weapons, the potential to fall off the crest has been growing with the height of the fall. Synthetic biology, programmable matter, quantum computing: these are waves so immense that they are only now finding land, only now starting to rise up and show us their true height. Even the grand waves of traditional computing and digital networking are still rearing up, still not fully washed over the world. Yet already we see what they have brought. Some people have ridden them well, many more have been left behind.

We cannot fight change; to fight change is to fight nature. But we can ride it. We can harness change so that it may be useful, so that it may propel us forward. If we teach each other how to, we can all ride together; and if we catch the waves of change just right, it can even be fun.

- JH