
THE MECHANICS OF DENIAL

June 11th, 2025

Someone was commenting on how wild politics is these days, so I told them what happened between Hamilton and Burr, and their jaw dropped. If you don't know: they fought a duel with pistols, Hamilton's shot missed, Burr's didn't, and Hamilton died of the wound. Imagine if that happened between two American politicians today. Modern politics is about as tame as a gossiping sewing circle compared to when the United States was founded.

 

The disconnect between a modern assessment of current politics and its accuracy relative to politics as it has existed throughout history comes down to an inability to keep things in perspective, in proportion.

 

Our focus determines our reality, and if we focus narrowly on some current event and divorce it from all of history, then that object of focus has the entire spectrum of reaction applied to it, because there is nothing else to act as a counterweight.

 

This disease of narrow focus and recency bias makes people woefully bad at, if not flat-out incapable of, assessing proportion. But what's the antidote? What does the inverse look like?

 

First, another example: Cancellation in the last decade has meant losing a job and some digital public embarrassment.

 

Cancellation used to mean getting burned at the stake, the Spanish Inquisition, guillotines in France or getting sent to a gas chamber. 

 

If anything, social networks may have greatly reduced the violent tendency of the censorship impulse in culture by making it digitally simulated instead of physically carried out. That impulse has been lurking within human culture forever (at least since other hominid species went extinct), and now forums like Twitter and Facebook function like a Ghostbusters trap, capturing it in the digital space where its physical impact is stunted.

 

Instead of putting things into proportion by examining events within a larger context, those events become all-consuming, and perspective becomes badly skewed.

 

To zoom out even more: have you ever heard anyone say "Not in my lifetime!"? This thought-terminating cliché is a favorite of mine because it's so indicative of the calcified echo chamber that doubles as a personal shrine to one's own pride in the horse blinders they've constructed and proudly wear. When someone like this hears about some impending innovation and says "Not in my lifetime," I bite my tongue. It's futile to argue. One of these inevitable tomorrows will unveil their hasty judgement, and I know that by then their slippery logic and feeble memory will have found some convenient way to completely forget those fatalistic words they'd uttered: Not in my lifetime. Instead, they'll complain about how said innovation doesn't work perfectly.

 

Again, it's a matter of proportion, but this time it's time itself that must be examined. The widespread mistake is to make judgements based on the present as a static snapshot - which is what most people do. It's a kind of recency bias mixed with an inability to zoom out and place events on a larger timeline.

 

Let's zoom way out: think about the time between the agricultural revolution and the industrial revolution, compared to the time between the industrial revolution and the digital age. And you really think the gap between the digital age and the next level of magic isn't going to be far more compressed? ...ok.

 

I've been thinking about denial quite a bit lately, and I've realized that its seed, stem, and root are far more subtle than they first appear. Willful ignorance seems to be at the heart of denial, but I think that's a contradiction in terms. People are certainly capable of hypocrisy, but ignoring something you know isn't the same as being unable to envision its implications with enough visceral force to change behavior. I think in most cases denial is the result of a weak imagination.

 

There's another software engineer in the family, and I'm always shocked when we talk about tech and the future. He seems fully committed to the idea that his profession and career have a few more decades to fill out what he thinks will be a normal human lifespan. (His company is beginning to talk about incorporating Cursor into their workflow. Meanwhile, I show him a couple of full-stack applications that I've built and launched within the last few months - in production and in use across an entire company - and his jaw drops.) While I do worry about him and his family, all of whom I'm very close to, I've realized that he simply lacks the imagination required to extrapolate the implications of recent innovations. I suppose this is why not everyone writes sci-fi; such implications seem to come naturally to me in daydreams. I invested in Tesla in 2016 because the advent of robotaxis seemed obvious after watching a lecture from Tony Seba about disruptive technology. It was just a matter of... time. And time is the only reliable superpower for investing.

 

Imaginative extrapolation is again a matter of proportionate thinking. It's seeing today - not as a static snapshot - but as a vector, one that creates a ratio of yesterday:today:tomorrow. We always have the first two terms of that ratio, and the more yesterdays we stack into it, the easier it is to solve for tomorrow. This is why the ratio of the time between the agricultural revolution and the industrial revolution to the time between the industrial revolution and the digital age is so important. The staggering contraction makes the implication clear: unless you're already on your deathbed, the future is definitely going to happen in your lifetime.
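The back-of-the-envelope version of that contraction can be sketched in a few lines. The dates below are rough, conventional estimates chosen for illustration (agriculture ~10,000 BCE, industry ~1760 CE, the digital age ~1970 CE), not precise history, and the naive extrapolation at the end is exactly that - naive:

```python
# Toy illustration of the yesterday:today:tomorrow ratio.
# Dates are rough conventional estimates, not precise history.
AGRICULTURAL = -10_000   # year (negative = BCE)
INDUSTRIAL = 1760
DIGITAL = 1970

gap_1 = INDUSTRIAL - AGRICULTURAL   # agriculture -> industry: 11,760 years
gap_2 = DIGITAL - INDUSTRIAL        # industry -> digital:        210 years

contraction = gap_1 / gap_2         # how much the gap shrank

# Naive extrapolation: if the same contraction holds once more,
# the next leap lands this many years after the digital age began.
next_gap = gap_2 / contraction

print(f"contraction factor: ~{contraction:.0f}x")   # ~56x
print(f"naive next gap: ~{next_gap:.1f} years")     # ~3.8 years
```

The literal number isn't the point - the point is that any plausible choice of dates yields a contraction so steep that "not in my lifetime" becomes the least likely outcome.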







THE CRAYON QUESTION: CREATION IN THE AGE OF AI

November 21st, 2024

Why are refrigerators plastered with crayon drawings in homes with young children? Are these drawings products for the parents and adults to consume? Perhaps. But phrasing it this way is a little ridiculous; it infuses a situation that is largely devoid of capitalistic structures with the terminology of capitalism. So why do these crayon drawings exist? If the answer is obvious, keep it in mind.

 
There is much squabbling over AI art. Most of it can be safely ignored, because almost all of it misses the point that should be the obvious answer to our crayon question. But the mere existence of this fussy, constipated, shit-slinging dispute is itself proof that the parties involved are blind to the answers to similarly fundamental questions with simple answers.
 
Much of this squabble is rooted in anxiety over financial stability. 
 
If the computer can produce a better image than I can, and do it ten thousand times faster than I can, then how can I possibly make a living as a designer? Replace image with almost any form of creation that can be seen or read on a screen and the concern is the same across industries. As I understand it, the majority of people connected to Hollywood for their livelihood are very nervous about the future of their professions.
 
Infusing capitalism into areas of human activity that we deem "art" radically alters the conversation. It comes to bear almost no similarity to our Crayon Question. The constraints of life and "making a living" pollute the answer with a vast network of incentive structures that are not present for the child plying a crayon to paper. The child is not thinking, "If I don't make a good enough drawing, then mommy won't give me food in exchange for it." Ponder for a moment how utterly brutal and heartbreaking it would be for a child to even conceive of this question. But this is essentially the question of adult artists, and the reality is that it makes one a "starving artist" simply because most output is not deemed valuable enough - unless, of course, the artist "makes it" and becomes plugged into one of the systems of mass distribution, be it traditional publishing or Hollywood - or rather Netflix, etc.
 
One of the largest bottlenecks for the success of the starving artist is the amount of time and effort it takes to get good at something. Some people get lucky: they grow up in the right situation, with the right random proclivities, and speed-run this training period while "making a living" isn't yet a problem. This is rare, and rarely conscious: almost no one knows what they want to be when they are young, and many adults, remembering their own cluttered and haphazard upbringing, will say to such people: you're lucky you knew what you wanted to do at such a young age. This balance of intense proclivity with the accidental discipline it creates is rare, and the outlying situation doesn't really apply to the conversation.
 
Those striving in an artistic direction who were not lucky enough to train young have a far more difficult time, because now a training period that does not produce anything that supports a living has to be balanced with actually making a living. Time is the essential resource, and as more of it gets allocated to making a living, less and less of it is available for the training period, stretching that period out to be much, much longer. This weighs on the human psychology: progress is slower. Success feels further and further away, and the dream of "making it" often starts to feel more like a delusion than an actual, tangible possibility. Let's put it this way: if a parent said to a child, "If your crayon drawing isn't good enough, you don't get dinner," how many children would just give up right then and there and start crying? If you have any experience with children, you'll likely agree that the percentage is high. Very high. Again, this is essentially the psychological situation of your average "starving artist."
 
There ought to be a distinction made here about the degree of creativity in a given production. Perhaps controversially: is the creative engine involved in writing an original, thought-provoking, incredibly entertaining script the same as the creative engine involved in the graphic design for the movie's poster? This is a weird and uncomfortable question. Uncomfortable because it forces beloved activities into a hierarchy that may imply one is better and one is worse. The reality is that this isn't the right question to ask, but it is a relevant question in terms of the fear of AI. There is a hierarchy of tasks that AI is steadily climbing. The point is not to declare which creative "skill" is better or worse, but to say that there is an order in which they will be subsumed by AI.
 
This order of subsumption represents a spectrum of creativity, and at the end of the process only one tiny piece of that spectrum will remain. Let's consider a couple of examples. Traditionally a sound engineer would be tasked with removing dead space from a conversation. Having recorded and produced over a thousand podcast episodes myself, I'm well acquainted with this drudgery. I was exceedingly happy when this process became automated and I could get it done in a couple of seconds instead of spending many, many minutes laboriously doing it myself. Compare this "creative" task, which sits at the low end of our spectrum of creativity (it really doesn't require much creativity, but it is part of the creative process), to the complete opposite: me sitting with a blank page and dealing with the cognitive situation of "I want to write a short story - what should I write?" Or even better, how about this very essay you are reading? This morning, while lifting weights, I had some thoughts about creativity and AI that felt novel, and decided that I needed to explore the ideas. Now how does AI relate to this part of the creative process? Should I ask AI: "Hey, I have an idea for an essay about art in the age of AI, and I think the title might be something like 'The Crayon Question' - can you write that essay in the style of Tinkered Thinking for me?"
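To make the "low end of the spectrum" concrete, here's a minimal sketch of what automated dead-space removal amounts to. Real tools operate on audio files; this toy version works on a plain list of amplitude samples so it stays self-contained, and the threshold and minimum-run-length parameters are invented for illustration:

```python
# Toy dead-space removal: drop long runs of near-silent samples,
# keep short pauses as natural spacing. Threshold and min_run are
# made-up illustrative parameters, not values from any real tool.

def strip_silence(samples, threshold=0.05, min_run=3):
    """Remove runs of quiet samples (|amplitude| < threshold)
    lasting at least min_run samples; keep shorter pauses."""
    out, quiet = [], []
    for s in samples:
        if abs(s) < threshold:
            quiet.append(s)          # accumulate the current quiet run
        else:
            if len(quiet) < min_run:
                out.extend(quiet)    # short pause: keep it
            quiet = []               # long pause: discard it
            out.append(s)
    if len(quiet) < min_run:
        out.extend(quiet)            # trailing short pause
    return out

audio = [0.9, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.7]
print(strip_silence(audio))  # → [0.9, 0.0, 0.8, 0.7]
```

It's a dozen lines because the task is mechanical: there is a right answer, which is exactly why it was automated first, and exactly why it sits at the opposite end of the spectrum from the blank page.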
 
If anyone thinks this is a good idea then I'd like them to consider a couple analogous questions:
 
Hey AI, can you eat my food for me?
 
Hey AI, can you do my bench press for me?
 
 
Hopefully the point is obvious: even if the AI is hooked up to some kind of robotic mouth where food can be physically placed and "eaten," the process is completely ridiculous, because you'd fucking starve - you aren't actually eating the food. Or if the AI were hooked up to robotic actuators that could lift your bar, it's useless, because it isn't your muscles burning ATP to do it. The same applies to the truly creative things AI can do. AI cannot run the neurological process in your mind necessary for producing something truly creative. It may be able to produce a similar outcome, but your brain will not change as a result, the way it does when you go through the process of creating it yourself.
 
When I had the thought that I would like to write this essay, I knew from years of experience writing over a thousand essays and short stories that the experience would yield things that are simply not possible for an AI to accomplish. I have known for a long time that the actual process of writing an essay or a short story is a process of discovery. This process doesn't just exercise my mind; it organizes and sharpens my thoughts. I get just as surprised by the next sentence as you do, because the reality is that I can't predict my next thought - I can only have that next thought, write it down, and then review it. AI can never replace that process, and that process is exactly what's going on when the child is plying crayon to paper. The initial urge might be "I want to make a drawing for mummy," or it might be an afterthought - "I have a nice drawing, I'll give it to mummy" - but the literal act of creating the drawing is one of identical self-discovery. The child might have a topic or subject in mind, just as I did while working out, but which line will be the third one drawn or written? I have as little idea of that as the child does. Neither of us knows until we actually get there, and it's the experience of the act, and the changes it makes to us as people, that drives the behavior at a core level.
 
Yes, this fundamentally core reason gets corrupted in a capitalistic framework. But it's entirely ignored in the current discourse because it unveils a very unsettling truth: much of the creative process in creative industries involves jobs that are the equivalent of color-by-numbers. This isn't to say there isn't skill involved. Sure there is, but it's not a skill which is unique.
 
Unique is probably the only component of how people use language that grates on my soul. Almost all misuse of language I can understand and often appreciate: if someone understands what someone else is trying to communicate, then language is being used correctly, bad grammar and novel constructions be damned. But saying "very unique" is uniquely concerning, because using an adverb of degree to modify an adjective whose definition categorically excludes degrees is to spout actual nonsense. Saying something is very unique is like saying that the color blue smells very century. Sorry, what? Yes, exactly. A tangent on the word unique might seem uncalled for, but it's vital for a discussion of AI and its impact on artistic production. AI might be able to produce incredible output that is commercially viable, but what it can never do is provide an artist with the unique experience of creatively exploring and discovering something new from their unique perspective. The fact that AI can and will subvert the commercial viability of the final product misses the point of why art exists in the first place. We do art to engage in a highly personal process of exploration and cognitive development. The fact that we need to "make a living" is not a fundamental reason for making art; it's a supremely inconvenient variable that pollutes incentives by linking our output to the procurement of the basic necessities needed to operate a functioning human body in a tribalistically oriented society.
 
Anxiety ensues as AI ramps up to rob craftspeople of tasks connected to creative activity. But why does AI have to come after the fun things? Why can't it do my dishes instead of replacing me as a designer at work? Again we need to revisit the Order of Subsumption. AI can't do a janitor's job because AI doesn't have a body. AI currently exists only on a screen, and anything that can exist on a screen (writing, a picture of a painting, etc.) can be part of the training for AI. Now AI is starting to subsume digital tasks: agents, as they are called, can write and respond to emails - more color-by-numbers tasks that are far closer to drudgery than to true creative exploration.
 
What the discourse on AI, art, and jobs seems to lack is an imagination that can extrapolate to definitive conclusions. The range of imagination on such topics is like a weak lantern in a very dark field. People can imagine changes they can already see, and that's it, and they regard those who extrapolate to logical extremes as fanciful and unrealistic: Foomers and Doomers, as they are termed. This is evidenced by the worker who is very anxious about AI taking their job but doesn't really care or think about AI eliminating all of humanity - which one might think is the logical extrapolation for such an anxiously focused person.
 
The most prevalent question in the discourse is: Well, if AI takes all of our jobs, what are we going to do!? This question is further evidence of a total inability to imaginatively extrapolate. AI can't live your life for you. It can and will replace you if you're doing a lot of monotonous work, but still, it can't live your life. And if your life garners the majority of its meaning from a job that is ultimately monotonous, then brace yourself for a very cold and very hard, spiritually infused slap in the face. I say this as someone who was laid off from a job most people think is impervious to AI - because I was replaced by an AI. I am not speaking from some protected pedestal, claiming that everyone should eat cake.
 
The human ability to adjust to new circumstances is, well, ridiculous, because it produces two entirely polar aspects of perspective. A reader of this essay might anticipate me claiming that "we'll adapt!" Sure - but that's always true. I bring it up because of how fast and completely we resettle into new circumstances, to such a degree that we're blind to relative improvements. The best example is the Louis C.K. bit about being on a plane and hearing for the first time over the intercom that it was a Wi-Fi enabled flight. The Wi-Fi inevitably crashes within a few minutes, because it's a brand-new system, and the guy next to him says, "This is bullshit." To which Louis C.K. acts out a mock reply: YOU ARE IN A METAL TUBE HURTLING THROUGH THE SKY AND YOU GOT INTERNET ACCESS. It's funny because the guy calling bullshit is so completely and thoroughly located in his current situation that he fails to realize how utterly incredible that situation is compared to, say, a 14th-century peasant who has worked the same field for 25 years straight. This ability to adjust to new circumstances is both a blessing and a curse. We adjust, which is often uncomfortable and sometimes painful and requires us to grow and change, but once the change is complete, we settle in, and we do so with an intensity of laziness that is ultimately debilitating - even crippling - to our perspective. It often requires another forced change, due to abruptly altered circumstances, to shake our heads free from our own assholes, which are so tight they deprive our brains of the oxygen necessary to power an imagination capable of having a novel thought.
 
Even after reading this, most people are still going to be unutterably chained to the question: Well if AI takes all of our jobs, what are we going to do? I'll respond to the question with a question:
 
Think about the 14th-century peasant and Louis C.K.'s airplane companion who thought broken Wi-Fi was bullshit. In a couple of centuries, or perhaps even a couple of decades, that airplane companion will be the new 14th-century peasant. Imagine what needs to change about technology, society, and people so that a guy complaining about Wi-Fi on a plane seems like a 14th-century peasant.
 
Instead of focusing on what AI might take away from you, invert the concern and think about all that AI might come to be able to do for you. Imagine if the entire food production system became controlled by AI, and it could even perform maintenance on itself in any physical capacity. Imagine billions of robots that could do all the tasks we don't like but do because we have to. The costs for such a fully autonomous system eventually trend toward zero, because all that's needed is the energy to run it, which we'll get for free from the sun. It's just a matter of getting the requisite atoms into the correct configuration. The beginning of this process will be very expensive, but as it moves forward, it will take over its own construction.
 
The same can and may happen to an industry like housing. Imagine: instead of hiring a contractor who has to buy all the materials and extra labor, you have the time to design the home yourself - to learn principles of design, to create a home in VR and walk around in it, make changes to it, study the principles of Christopher Alexander, and build a living space so well attuned to your personality and your family that it is itself a unique work of art. Robots show up with materials that have been harvested and generated by automated systems, and they build your house in a matter of hours, or a couple of days. And when you want a change, robots show up again and renovate.
 
But who will pay for all of this?
 
Ultimately, the sun. But the start-up costs for getting all of this going are huge, and rooted in dollars.
 
Utopia is, at its core, a coordinated set of automated systems that provides for humans the way parents provide for children, giving all humans the freedom to explore the way we endeavor to give children the freedom to explore and develop. If this seems fanciful and far-fetched, please ask yourself how bright the lantern of your imagination is in this dark field of future unknowns. Certainly things could go wrong, be it a nuclear war that blasts us back to the Stone Age or a paper-clip maximizer that turns the galaxy into a pile of paper clips.
 
The utility of discussing extremes is to induce some yoga in a mind too narrowly focused on local anxieties like: AI is going to take my job!
 
But what about now, and tomorrow? Instead of the next decade or next century. All of that utopia shit sounds great but how do I pay for groceries while I wait for heaven to materialize on earth?
 
My best advice is: get weird. Embrace new technologies and try lots of things with them.
 
I have a good friend who is a film director, and I peppered this friend with questions to try to get at the root of concerns around AI, the industry, and making a living. The conclusive thought of that conversation was the realization that big studios with all the money control distribution, and it's this business-and-distribution issue that is the core bottleneck for individual creatives, who tend to shy away from thinking about "business stuff." I responded by saying: you could build a website, with payments and maybe subscriptions, and just start making scrappy movies and putting them on this website. It's not a global release in theatre chains, but it is global distribution! YouTube has a near-monopoly here, and sure, that could be part of the springboard: release the first few minutes for free on YouTube to gain traction, then convert interested people into paying customers on your own website. Would this work? Maybe. The point is, it's never been easier or cheaper to run this experiment, and the sooner a person does it, the more time they'll have to grow it.
 
Tinkered Thinking recently released www.printedthinking.com, a Blog -> Book platform. I've had this idea for years, and realized a few months ago just how fast I could build and launch it now that AI effectively functions like a small team of software developers for me. Just yesterday I launched another product unrelated to Tinkered Thinking - an idea I had a few weeks ago which might be useful. I have a couple more ideas lined up which I plan to build and launch within the next few months. This speed of development, and this depth and range of experimentation, was simply impossible a few years ago. Will these make money? Well, Printed Thinking has paying customers. But the real answer is: you have to build it and launch it to find out. Same as any business.
 
The ground has started to shift under our feet, and it may dissolve into a veritable ocean, where many may drown if they are not quick enough to recognize the change, stubbornly keeping their feet firmly planted where there no longer is any ground. Some are building their own sailboats and skiffs, some have arks ready for the coming flood, and some will be quick enough to assemble rafts from driftwood in the swells. And while all that might seem terrifying, there's a good chance that after the initial flood we'll find floating islands in the wake, lush with a way of living that may be incomprehensible from our current perspectives.
 
What is fundamental to understand is this: 
 
As employment opportunities contract due to technology, personal agency will expand due to technology.
 
How you uniquely use these new technologies will be completely up to you, and that's not something an AI could ever replace.







VASA SYNDROME

October 6th, 2024

The Vasa was an enormous and beautiful Swedish warship that sailed about 4,200 feet, and then sank. Building a ship, especially in the 1600s, before the Industrial Revolution, was no small feat. It required a staggering amount of elbow grease, from cutting down the trees, to shaping the wood, to making the rope, to nailing everything in place - even the nails had to be made by hand.

 
Apparently the ship wasn't designed correctly - which is to say, it was designed differently from older designs that had stood the test of time. This isn't by any means an indictment of the new; rather, it's a critique of how we explore the new in relation to our connections to the past. Loads of resources can safely be poured into a proven design. This doesn't mean that new designs shouldn't be explored, only that the resources we allocate to new designs should be proportional to the degree to which they have proven their worth. Though even this doesn't seem quite right: many radical innovations required enormous amounts of tinkering to get right. Thomas Edison, for example, is famously said to have gone through 10,000 iterations before he finally got the lightbulb to work: an enormous amount of resources poured into something completely unproven. But with each iteration he did not make the largest possible lightbulb. So it's not simply all or nothing when it comes to resources; it's a matter of which resources we spend freely and which we ration.
 
A full-sized ship is an enormous amount of wood, but a radical new design can be tested with a much smaller model if the ratios and proportions are correctly calculated. The Vasa was unbelievably unstable, with most of its weight in the upper structure of the hull, making it top-heavy. When a wind stronger than a breeze heeled the ship over, it sank. One could have figured this out with a tiny model of the ship, and yet.
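The small-model check is not exotic. A standard naval-architecture rule of thumb says a hull is initially stable when its metacentric height GM = KB + BM - KG is positive, where KB is the height of the center of buoyancy, BM = I/V (waterplane inertia over displaced volume), and KG the height of the center of gravity. Here's a toy sketch of that check; every number below is invented for illustration, not a Vasa measurement:

```python
# Toy initial-stability check via metacentric height.
# GM = KB + BM - KG, with BM = I / V. GM > 0 means stable.
# All numbers are invented for illustration.

def metacentric_height(kb, waterplane_inertia, displaced_volume, kg):
    """Return GM; positive means the hull rights itself when heeled."""
    bm = waterplane_inertia / displaced_volume
    return kb + bm - kg

# Same hull, two loadings: weight piled high (large KG) vs.
# ballast low in the keel (small KG).
top_heavy = metacentric_height(kb=1.0, waterplane_inertia=40.0,
                               displaced_volume=20.0, kg=4.0)
ballasted = metacentric_height(kb=1.0, waterplane_inertia=40.0,
                               displaced_volume=20.0, kg=2.0)

print(top_heavy)   # -1.0 -> capsizes, like the Vasa
print(ballasted)   #  1.0 -> stable
```

A few arithmetic operations - or a scale model in a tub of water - is all it would have taken to see the problem before committing a forest's worth of oak.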
 
Many art projects - novels especially - and even start-up companies can suffer from Vasa Syndrome. When founders raise unimaginable gobs of money for a product that could be prototyped and tested with customers on an incredibly slim budget, the practice seems more akin to building a Vasa. Why amass so much money and dedicate so much time to something that might not work?
 
Let's compare the novel with Edison's lightbulb. Both take an enormous amount of time, i.e., a huge amount of resources. But there's a crucial difference: Edison is getting feedback; the aspiring novelist is not. The naive novelist is much like the designer of the Vasa: imagining something radically new and envisioning the triumph of the day it is finally launched into the world - only to find that no one wants to read the book, and the few who do manage to crack its pages find little to hold their attention. Edison is more like the short story writer, each iteration of the lightbulb a new little story. Each time he tries to turn on a given iteration of the lightbulb is like publishing a short story for all to see and read and give feedback on. Someone reads it and loves it and shares it? It's akin to the bulb flickering on briefly. A publisher reads a couple of stories and offers a book contract? Well, now that bulb is glowing brightly and steadily.
 
Oddly, the hockey stick of exponentials is relevant here. Whether that exponential goes up or down depends on how we go about our projects. A short story writer, or an inventor of a lightbulb, can see small gains with consistent feedback, and the progress seems linear - much like an exponential looks in the early stage of the curve. But then, seemingly overnight, the effectiveness of the writer, or the honed design of the lightbulb, turns on and takes off.
 
Contrast this with the novelist who cloisters their effort from feedback, or the founder who fails to acquire or interact with customers. The lack of feedback creates a totally different kind of trend, one that leads to a total flop - and in the case of the Vasa, a literal flip: a sunk ship.
 
The moral of Vasa Syndrome is to seek consistent feedback. Don't work on the idea until it's perfect; let reality have its say about how the design should evolve.







LINGUISTIC PACIFIER

September 11th, 2024

What do we say when we don't know what to say? 

 
There's a feckless panoply to pick from. Thought-terminating clichés reign supreme in this unproductive arena. It is what it is. That's life. So it goes. This too shall pass. It could be worse. Here we go again. It will all work out. Their effect is emotive and mechanical. They give off the impression that something profound has been said. But the profundity is a ruse, an illusion created by the sense of being dumbfounded, by an inability to respond. These sentences function mechanically like punctuation - punctuation without a sentence. Even a question mark is impossible to respond to if there is no substance preceding it. Such linguistic implements are the equivalent of turning and walking away from a conversation. They not only fail to provide a means of furthering discussion, they emphatically kill the possibility.
 
Defenders might squawk about intentions: there's good intention behind saying such things. It's a comfort to be told "It will all work out."
 
There are two problems with such buffoonery. Regarding "It will all work out" specifically, the simple truth is that it certainly does not all work out. Every time someone says this to me, I point out that it did not work out for the malaria-ridden child who just died of starvation in sub-Saharan Africa. This, of course, is quite negative, and comes off as offensive, because it's a backhand to the good intentions of the person sputtering clichés. But perhaps a backhand is deserved, in the now-memed old-school-Batman-comic-slap way. Why? Because these linguistic pacifiers are a searing indication of cognitive laziness, which highlights the other problem with such buffoonery:
 
 
Good intentions only matter so much. If good intentions consistently do not match actual outcomes, then good intentions become increasingly meaningless. If the disconnect between intentions and outcomes remains unchanged, then it's a sign that either this person is incapable of changing in response to the feedback from reality that they are not having the desired effect, or they simply do not care enough to dedicate the time and attention required to observe, understand, learn and change in adequate measure to resolve the insidious disparity. In short: a person is either too stupid or just doesn't care. Or worse: both. However, chances are it's only the latter. Unproductive discussions about intelligence aside, it's a robust fact of life that if someone cares about something, it generates a nearly inexhaustible well of energy to draw from in order to learn and understand: even the stupidest person can change when their heartstrings are sufficiently plucked by some unintended consequence. 
 
It's likely that the majority of language we employ is the result of habit. One need only wonder and ask: how can someone with many, many decades behind them be such a bad communicator? Doesn't so much history force practice? Unfortunately the answer is no. The years require only a habitual way of communicating in order to get through all that time. Improvement only comes from conscientious practice, and most habit is unconscious automata. Most communication is a set of automatic linguistic patterns. After years of lukewarm communication, the rails of expression are more like ruts of habit. The consistent disparity between intention and outcome is not resolved, but it's manageable, at least in an emotional sense - the consequences are not so bad and they fail to bother heartstrings. Even the emotional fallout of poor communication can become just another part of the habitual pattern. Here we go again. And in these ways whole populations can spend many thousands of hours practicing without ever advancing beyond the skill of a simple novice. 
 
 
The question at the beginning should now carry with it an appropriate amount of horror: What do we say when we don't know what to say? The consequences of how we each individually answer this question have tremendous and far reaching impact. The answer to this simple question may readily define the health of all our relationships. And if at this point, you, dear reader find yourself grasping in frustration: Well what are you supposed to say! If you don't know what to say, and nothing comes to mind and you have good intentions and you want to provide some comfort, what do you say!
 
There's one root issue weaving between, around and underneath this whole topic. It's silence. It makes us uncomfortable. I'll always remember asking my grandmother: why do you always have the T.V. on? Her answer was so candid I don't think she registered the magnitude of what it meant. She said something to the effect of "When Harry was dying I didn't want to think about anything, so I put the T.V. on so I didn't have to think about it, and then after he was gone, it was just comforting to have the sound."
 
Sound. That's it. Why is it quiet in libraries? Because people are trying to think. Sound, particularly human voices, hijacks thought. All these linguistic pacifiers merely fill the space drawing a compromise between communicating in a way that really doesn't help (and may even truly hurt) our relationships and staving off the horror of silence.
 
As Blaise Pascal once said "All of man's problems arise from his inability to sit quietly in a room alone."
 
I'd take it one step further and say many of man's shit relationships arise from his inability to think quietly in the presence of a loved one until something better to say comes to mind. 
 
The answer to that question: what to say when you don't know what to say, is to not speak, but sit with the issue. Allow your mind to explore the topic in a deeper and broader way. 
 
Often in conversation we are tailoring our own mind to try and see the point of the other - which is a good thing, one of the very best things. And so when a distressed loved one comes to the fraught terminus of their concern, we arrive with them at a confusing injunction.
 
But a good listener doesn't just follow the trail of their companion in conversation. A great listener understands that healthy conversation benefits most from a dynamic set of perspectives. 
 
I see where they are coming from. But what can I see that perhaps they haven't considered?
 
We've all had the experience of offering one or two points to consider and being immediately shot down. Again, it's the emotive aspect that is the problem. We feel shut down instead of realizing: gee, I'm talking to a relatively intelligent person who has clearly spent a LOT more time thinking about this than I have, should it really be a butt-hurt surprise that they've already considered the points I bring up?
 
Again, the answer is to use silence as a tool. It creates a surprising amount of space. Neil Gaiman, when questioned about how he thinks up all his ideas for stories has said: I just allow myself to be bored. After a while the mind begins producing ideas to entertain itself, and I just write them down. 
 
Sit with silence, sit with the issue, and, if you care, new ideas will arrive. But it's important to realize that they never arrive with the alacrity we expect. We've been habituated by linguistic patterns to believe that responses are supposed to come at a quick interval, like a volley of tennis. Silence in tennis means the game is over. But the problem is that conversation, despite its usual similarity to the back-and-forth of tennis, is not tennis. 
 
Good conversation is chess. The main object of chess is to try and see something about the situation that your opponent hasn't realized. This is exactly what the object of conversation is when we don't know what to say. The answer is not to say the first innocuous thing that comes to mind. That would be like blundering a game of chess by moving any old piece just for the sake of hearing the sound of the piece hit the board when you place it on a new square. The answer is to sit with the conundrum in silence, to focus on it from many angles, to consider all of its parts and its possible directions. To work hard to try and find some aspect your companion in conversation has failed to see, something that might truly help that person you care deeply about.







MESSY

September 10th, 2024
"Cleanliness is next to Godliness" 
           
                - John Wesley
 
 
 
"A spotless home often has cluttered closets." 
 
                 - Tinkered Thinking
 
 
 
A perennial debate rages between the tidy and the disheveled. Steve Jobs was apparently famous for demanding beauty on the inside of Apple's products - not just the outside. Order is valorized and we seek to use it to tame nature, whether with perfectly manicured lawns, immaculate rows of spotless corn, or our own homes presented with the veneer of a museum. Even programmers are infected with this debate, with the loudest worshipping "clean code", as opposed to the derogatorily termed "spaghetti code". It's exactly what it sounds like: code that is intertwined with itself in countless, untraceable ways. Well, almost untraceable. 
 
There's a couple of key distinctions that don't often enter the debate. One is that spaghetti code is worst when it's written by someone else. Clean code is necessary when working in teams. It's mostly about readability and quick comprehension. In fact, clean code is a declaration that humans simply suck at understanding and following complexity. We don't have particularly powerful short-term memories, and clean code is the answer to that: it's easier to understand, so it's quicker for someone new to the code to read it, understand it, and successfully make changes to it. 
 
Perhaps the most important distinction that never enters the conversation about clean code vs spaghetti code is whether the computer cares or not. The computer absolutely does not give a flying fuck whether the code is "clean" or not. The computer doesn't care, period. It simply injects electricity through circuits that are arranged by the code. That electricity either successfully makes it through the maze, or it gets hung up, and crashes. In theory, a certain arrangement of spaghetti code might be much more efficient than human-readable "clean" code. So, which is better?
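As a toy sketch of that indifference (my own illustrative example, not from any real codebase): the two functions below compute the same thing, and the interpreter treats them identically. Only a human reader notices the difference.

```python
# "Spaghetti" version: tangled control flow, cryptic names.
def f(x):
    t = 0
    i = 0
    while True:
        if i >= len(x):
            break
        if x[i] % 2:
            i += 1
            continue
        t += x[i]
        i += 1
    return t

# "Clean" version: the intent (sum the even numbers) is legible at a glance.
def sum_of_evens(numbers):
    return sum(n for n in numbers if n % 2 == 0)

data = [1, 2, 3, 4, 5, 6]
print(f(data))             # 12
print(sum_of_evens(data))  # 12
```

Both print 12, and the machine is equally happy either way. The entire case for the clean version is made to the next human who has to read it.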
 
Depends on one's priorities. If one's work reputation is on the line, then leaving behind code that is incredibly difficult to deal with is not exactly something to aspire to. Hence why so many valorize cleanliness. They have very clear incentives for such. They don't want to put up with more spaghetti code.
 
But what if you're working on your own? Well this is a totally different scenario, and while it's a common humorous meme to liken old code one has written to hieroglyphics, moving fast and making a mess has its benefits. Those who toss aside concerns about clean code have different priorities because they have different incentives. A solo hacker trying to build a small software business cares about one thing above all: does it work for the customer? The customer is a bit like the computer in this respect. The customer doesn't give a flying fuck how pretty the code is, the customer just cares if the product works or not, because, naturally, they're trying to use it for some specific useful end.
 
Looking at other solo creatives we often see something far different than the lifeless museum organization. Albert Einstein's desk when he died was famously a disaster. (Google an image of it.) Or, pull up a picture of the complete human circulatory system - to highlight the o.g. creator - and ask whether it looks like clean code. It literally looks like spaghetti molded into the shape of a human.
 
So what's the deal with this debate? Tidiness is mostly a form of communication to other people, and it exists because our oh-so-powerful brains are actually quite allergic to complexity. We interpret what we don't understand as chaos, so we seek to make the chaos orderly, and often, as a result, we drain the magic that was once contained within. There are, of course, subtler forms of organization. Things like permaculture, for example, seek to strike not just a balance between the chaos of nature and the order we humans desire, but a true symbiosis - one that achieves more than either rampant "untamed" chaos or deathly museum-like order can alone. Such virtuous cycles require a different understanding, one that doesn't eschew chaos, but seeks to understand it without destroying it, and by doing so, glimpse untapped leverage hidden within reality.