Fulcra
By RYAN J. A. MURPHY
I have a foreboding of an America in my children’s or grandchildren’s time—when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness…
Carl Sagan, as quoted by @Andromeda321 in this interesting Reddit thread on the regrettable trends of the 2010s.
The thread discusses the growth of anti-intellectualism and conspiracy theories. I’m reminded of this timeless Medium post about how hating Ross in Friends became a meme in and of itself, reinforcing the persecution of science in the ’90s. From David Hopkins:
I want to discuss a popular TV show my wife and I have been binge-watching on Netflix. It’s the story of a family man, a man of science, a genius who fell in with the wrong crowd. He slowly descends into madness and desperation, led by his own egotism. With one mishap after another, he becomes a monster. I’m talking, of course, about Friends and its tragic hero, Ross Geller.
[…]
If you remember the 1990s and early 2000s, and you lived near a television set, then you remember Friends. Friends was the Thursday night primetime, “must-see-TV” event that featured the most likable ensemble ever assembled by a casting agent: all young, all middle class, all white, all straight, all attractive (but approachable), all morally and politically bland, and all equipped with easily digestible personas. Joey is the goofball. Chandler is the sarcastic one. Monica is obsessive-compulsive. Phoebe is the hippie. Rachel, hell, I don’t know, Rachel likes to shop. Then there was Ross. Ross was the intellectual and the romantic.
Eventually, the Friends audience — roughly 52.5 million people — turned on Ross. But the characters of the show were pitted against him from the beginning (consider episode 1, when Joey says of Ross: “This guy says hello, I wanna kill myself.”) In fact, any time Ross would say anything — about his interests, his studies, his ideas — whenever he was mid-sentence, one of his “friends” was sure to groan and say how boring Ross was, how stupid it is to be smart, and that nobody cares. Cue the laughter of the live studio audience. This gag went on, pretty much every episode, for 10 seasons. Can you blame Ross for going crazy?
People in the Reddit thread point out that these seemingly recent trends have been taking root for a long time. While this is true, it’s also true that (just like seemingly everything else) these phenomena have been moving much faster and growing much larger in recent years. Which leads to a curious tangent: how do accelerated scales of change play on our biases? Does the interaction between these biases and our accelerated experiences change our perception of the world?
The over- and misuse of AI is one of my biggest tech pet peeves. It truly is evil to tack the term “AI” onto the description of nearly any product. It also damages the long-term potential of AI by corrupting what the term means, especially for the everyday people who aren’t involved or invested in building these tools, but who will use them (or be used by them).
Arvind Narayanan on Twitter:
Much of what’s being sold as “AI” today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and how we can push back. Here are my annotated slides: https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
Key point #1: AI is an umbrella term for a set of loosely related technologies. Some of those technologies have made genuine, remarkable, and widely-publicized progress recently. But companies exploit public confusion by slapping the “AI” label on whatever they’re selling.
Key point #2: Many dubious applications of AI involve predicting social outcomes: who will succeed at a job, which kids will drop out, etc. We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when “AI” is involved.
Key point #3: transparent, manual scoring rules for risk prediction can be a good thing! Traffic violators get points on their licenses and those who accumulate too many points are deemed too risky to drive. In contrast, using “AI” to suspend people’s licenses would be dystopian.
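Narayanan’s contrast is easy to make concrete. Here’s a minimal sketch of a transparent, points-based suspension rule; the violation weights and the threshold are hypothetical values of my own, not any real jurisdiction’s (nor from the talk). The point is that every input to the decision is legible and contestable.

```python
# A minimal sketch of a transparent, manual scoring rule in the spirit of
# Narayanan's example. The points and threshold are hypothetical, not any
# real jurisdiction's values.

VIOLATION_POINTS = {
    "speeding": 3,
    "running_red_light": 4,
    "distracted_driving": 5,
}

SUSPENSION_THRESHOLD = 12  # hypothetical cutoff


def license_points(violations: list[str]) -> int:
    """Sum the points for a driver's recorded violations."""
    return sum(VIOLATION_POINTS.get(v, 0) for v in violations)


def is_suspended(violations: list[str]) -> bool:
    """Suspend once points reach the threshold.

    Every input to this decision is visible above: anyone can check why
    a suspension happened, and anyone can argue about the rule itself.
    """
    return license_points(violations) >= SUSPENSION_THRESHOLD


record = ["speeding", "speeding", "running_red_light", "distracted_driving"]
print(license_points(record), is_suspended(record))  # 15 True
```

Contrast that legibility with a black-box model predicting “riskiness”: there is nothing above to audit or appeal.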
If academia ceases to have an impact it loses its raison d’être. Impact is what differentiates meaningful academic work from mere busywork. It makes the difference between signal and noise.
[…]
Ultimately, the questions that concern us [are] what role research plays in society and how we can create a research system with impact at its core.
Indeed. We have to be asking (and answering!) questions that matter.
I like this project. Benedikt and Sascha say they’re taking a systemic approach to model the full complexity of academic impact:
academia struggles with creating/measuring/generating impact because it struggles to conceptualise and structurally anticipate it. We are missing a systemic perspective on impact that is grounded in the fact that different forms of meaningful academic work show very different forms of impact.
The work is supposedly semi-open: the authors ask anyone who reads each chapter, released incrementally on Google Docs, to contribute comments, and they will then work to incorporate those insights into the final output.
Abhijit Banerjee and Esther Duflo of M.I.T. and Michael Kremer of Harvard have devoted more than 20 years of economic research to developing new ways to study — and help — the world’s poor. On Monday, their experimental approach to alleviating poverty won them the 2019 Nobel Memorial Prize in Economic Sciences. Dr. Duflo, 46, is the youngest economics laureate ever and the second woman to receive the prize in its half-century history.
Amazing news. Esther Duflo has been a research hero of mine since Cal Newport profiled her as a story of purpose-finding.
In this Wired article, Adam Savage provides a pragmatic description of how he breaks down complex projects using lists.
In my mind, a list is how I describe and understand the mass of a project, its overall size and the weight that it displaces in the world, but the checkbox can also describe the project’s momentum. And momentum is key to finishing anything.
Momentum isn’t just physical, though. It’s mental, and for me it’s also emotional. I gain so much energy from staring at a bunch of colored-in checkboxes on the left side of a list, that I’ve been known to add things I’ve already done to a list, just to have more checkboxes that are dark than are empty. That sense of forward progress keeps me enthusiastically plugging away at rudimentary, monotonous tasks as well as huge projects that seem like they might never end.
I love the physics metaphor here. There are plenty of other insights to be gained by thinking about how work follows physical principles. For instance, projects also have inertia (an untouched project resists being started), friction (every handoff and context switch bleeds energy), and surface area (the more people and systems a project touches, the more ways it can snag).
To return to momentum, though, Adam makes an excellent point: breaking down the work helps keep momentum going even when you put the work down.
That may be the greatest attribute of checkboxes and list making, in fact, because there are going to be easy projects and hard projects. With every project, there are going to be easy days and hard days. Every day, there are going to be problems that seem to solve themselves and problems that kick your ass down the stairs and take your lunch money. Progressing as a maker means always pushing yourself through those momentum-killers. A well-made list can be the wedge you need to get the ball rolling, and checkboxes are the footholds that give you the traction you need to keep pushing that ball, and to build momentum toward the finish.
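Adam’s physics framing maps neatly onto a tiny data structure. As a purely illustrative sketch (mine, not anything from the article): a list that knows its own mass (task count) and its momentum (fraction checked off), including Adam’s trick of adding already-finished tasks.

```python
from dataclasses import dataclass, field


@dataclass
class ProjectList:
    """A checklist that tracks its own 'mass' and 'momentum'.

    An illustrative toy: mass is the number of tasks, momentum the
    fraction already checked off.
    """

    tasks: dict[str, bool] = field(default_factory=dict)

    def add(self, task: str, done: bool = False) -> None:
        self.tasks[task] = done  # Adam's trick: log finished tasks too

    def check(self, task: str) -> None:
        self.tasks[task] = True

    @property
    def mass(self) -> int:
        return len(self.tasks)

    @property
    def momentum(self) -> float:
        return sum(self.tasks.values()) / self.mass if self.mass else 0.0


build = ProjectList()
build.add("sketch the design", done=True)  # already done; counts anyway
build.add("cut the parts")
build.add("assemble")
build.check("cut the parts")
print(f"mass={build.mass}, momentum={build.momentum:.0%}")  # mass=3, momentum=67%
```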
Another point in the article that’s worth emphasizing:
[I]n a project with any amount of complexity, the early stages won’t look at all like the later stages, and [the manager] wanted to take the pressure off any members of the group who may have thought that quality was the goal in the early stages.
I’ve heard this discussed in the context of critique, or “10% feedback”. When sharing work with others, it’s important to disclose the stage the work is at. Typos should be caught when a project is basically ready to publish; they shouldn’t even be discussed while a work is being conceptualized. The focus in the early stages should be on the concepts themselves, and how they fit within the broader context.
Last thing. This is excellent:
There is a famous Haitian proverb about overcoming obstacles: Beyond mountains, more mountains.
🏔
For serious system mapping work, spending [significant] time studying, thinking about, and mapping your system helps ensure you are addressing root causes rather than instituting quick fixes. In the long term, the time and resources you invest in Systems Practice will pay dividends.
But what if you’re not quite sold on the Systems Practice methodology yet? What if you haven’t encountered systems thinking before and just want to dip your toes in? Or what if you’re an expert or an educator with only a few hours to introduce Systems Practice to a fresh new group of systems thinkers?
I have been in the latter situation, and it’s a challenge. In my experience, people who are wholly new to systems thinking can take a lot of time to acclimate to the mindset. But! If, as a teacher, you can’t illustrate the benefits quickly, it’s easy for newcomers to disengage.
So, I’m glad this exists. This is a wonderful new resource from Kumu’s Alex Vipond that helps walk you through systems and Kumu’s tools at the same time.
Land that became too toxic for people to farm and live on after the 2011 meltdown at the Fukushima Dai-ichi Nuclear Power Station will soon be dotted with windmills and solar panels.
The Fukushima disaster unfolded as an incredible story of systemic response to new scales of tragedy. Take, for instance, the Skilled Veterans Corps: a group of elderly volunteers who helped with cleanup, knowing that the damaging radiation would have less impact on their lives than it would on younger volunteers.
Now Fukushima’s next chapter is evolving as an example of systemic creative destruction, as new opportunities are unlocked by the collapse of the region’s previous energy strategy.
“In the five years that we’ve had to assess the effect [the Gigafactory has] had on the workforce, on the community, I think there have been these ramifications that we talk about in the episode that nobody was really prepared for,” Damon said in an interview with The Verge. “Like, we knew there was going to be an issue with housing, which other cities are experiencing, too. But that’s become super critical.”
Side-effects of growth are not a new problem, but the massive initiatives we’re seeing recently might spark new varieties of old issues.
Through model-based learning, students use diagrams as a way to think about and reason with systems—and to think about how complex systems interact and change.
“Model-based learning” seems like a reframing of classic teaching practices, but it’s nonetheless a powerful reframing. Emphasizing the model, and encouraging students to test and iterate on their models, is catchy. It’s also deliberately organizational: it requires students to organize and structure their thinking about a given system, often visually.
There is a significant gap in research about Canadian data collection activities on a granular scale. This lack of knowledge regarding data collection practices within Canada hinders the ability of policymakers, civil society organizations, and the private sector to respond appropriately to the challenges and harness unrealized benefits.
So true. This looks like an interesting series from the great team at Brookfield.
Something strange is happening with text messages in the US right now. Overnight, a multitude of people received text messages that appear to have originally been sent on or around Valentine’s Day 2019. These people never received the text messages in the first place; the people who sent the messages had no idea that they had never been received, and they did nothing to attempt to resend them overnight.
It is incredible to think that this could happen on a scale big enough to hit headlines now, but it wasn’t noticeable on Valentine’s Day originally.
That’s one of the problems with our ever-more-complex technologies: we’re growing accustomed to the bugs. It gets easier and easier to dismiss weird tech events as glitches and move on without worrying. Unreliability is, itself, unreliable.
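Reports at the time traced the glitch to a third-party messaging server that had failed around Valentine’s Day and was later brought back online, replaying its queue. As a rough, speculative sketch of how a store-and-forward system can resurface stale messages (the names and behaviour below are my assumptions, not any carrier’s actual code):

```python
import time
from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    body: str
    queued_at: float  # epoch seconds


@dataclass
class StoreAndForwardServer:
    """Toy model of how months-old texts can suddenly reappear."""

    queue: list[Message] = field(default_factory=list)
    online: bool = True

    def accept(self, msg: Message) -> None:
        """Queue a message for later delivery."""
        self.queue.append(msg)

    def flush(self) -> list[Message]:
        """Deliver everything queued, with no staleness check."""
        if not self.online:
            return []
        delivered, self.queue = self.queue, []
        return delivered


# Hypothetical timeline: a message queued on Valentine's Day 2019...
server = StoreAndForwardServer()
server.accept(Message("ex", "Happy Valentine's Day!", queued_at=1550102400.0))
server.online = False  # ...but the node fails first; the queue is orphaned

# Months later, maintenance brings the node back up.
server.online = True
for msg in server.flush():  # stale messages arrive as if brand new
    age_days = (time.time() - msg.queued_at) / 86400
    print(f"{msg.sender}: {msg.body} (queued {age_days:.0f} days ago)")
```

Nothing in `flush()` asks how old a message is; that missing staleness check is the whole bug.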
But there can be major consequences to seemingly innocent bugs:
… one person said they received a message from an ex-boyfriend who had died; another received messages from a best friend who is now dead. “It was a punch in the gut. Honestly I thought I was dreaming and for a second I thought she was still here,” said one person, who goes by KuribHoe on Twitter, who received the message from their best friend who had died. “The last few months haven’t been easy and just when I thought I was getting some type of closure this just ripped open a new hole.”
Herein, then, lies the tyranny of classification: The borders we draw for ourselves create a prison of thought and collaboration, inhibiting movement, connectivity, and learning.
Dominic Hofstetter outlines the many benefits of categorization, too. We have to have both specialization and generalization—categories and loose files. The key is developing processes, protocols, and ways of working that elevate the benefits of both.
The ever-refreshing Paul Jarvis shares some uncommon thoughts on productivity in Jocelyn K. Glei’s Hurry Slowly podcast.
In particular, Paul and Jocelyn discuss the importance of resilience. Citing research and his own experience, Paul argues that resilience matters more to success than many other, more celebrated factors.
Obviously, though, enabling resilience is not as easy as simply pointing out how important it is. As they discuss, resilience isn’t something innate—which means that it can only be developed through experience. And this is where things get tricky: who gets to have resilience-building experiences?
In my research on innovation skills, I found that resilience was one of three key domains that aren’t treated as important outcomes by our public education systems. This means resilience training isn’t reliably provided as a public good: only if you’re lucky (or privileged) will you have the chance to build up your resilience muscle.
Incredible achievement, but it makes me wonder: what are the 0.2% of humans doing differently?
These stories of AI achievement are sure to proliferate in the coming years. By studying the people who can still out-think machine-learning strategies, we might learn something about how humans and machines can best complement each other.
Ryan J. A. Murphy
ryan@fulcra.design
ryanjamurphy
Canada
Memorial University of Newfoundland
fulcra.design
Helping changemakers change their worlds through systemic design and with innovation, leadership, and changemaking education.