Fulcra
By RYAN J. A. MURPHY
“The ISA is the business model, not education,” says Kim Crayton, a business strategist and founder of CauseAScene, an organization that’s seeking to disrupt the status quo in tech. “You cannot tell me that education is your business model when you have not registered as an institution.” For months, Crayton has been speaking about the problems with coding bootcamps on her podcast, where she’s argued that they target vulnerable communities. “You’re put in these spaces and putting in 110 percent and it’s still not working and you’re told to ‘trust the process,’” she says.
Great reporting on this at The Verge.
Kim Crayton makes an excellent point. The promise of many of these neo-credentials is that students can leapfrog the things everyone fears about the conventional education system. No one is more vulnerable to taking on loads of student debt than the students who need an education the most, and those same students suffer the most if their university or college fails to equip them for a career. Lambda promises to solve both of these problems, which makes it extremely attractive to poor students.
Sadly, there’s always a catch.
Ben Thompson, in discussion with John Gruber:
It was mindblowing. It was absolutely incredible. The way that you could just do stuff that wasn’t really possible [on a computer]. Again, it was technically possible on a computer, but the user interface and experience was just transformative on the iPad. It was absolutely incredible.
And Jobs knew it. It’s one of my all-time favourite Jobs moments. It’s like fifteen seconds after the demo, and it’s just like… he’s used this. He was involved in the creation of it. They had run through the demo. He knew it. And even then, he was just astonished. He’s just like ‘I can’t believe [this]…’
[…]
It was, to my mind, the culmination of his life’s work. He comes on there, and he’s like, ‘Isn’t it incredible? Now anyone can make music.’
I almost want to transcribe this whole episode. John Gruber and Ben Thompson discuss the potential of the iPad—and its failure to reach it.
Ben uses the term “transformative” deliberately above. They discuss how, before the iPad, no computing experience could adapt to become a wholly new tool or environment for whatever the user wanted to do. But the iPad can become a piano or a canvas or a television. In this sense, they argue that the iPad has (or had) the potential for disruptive innovation (RIP Clay Christensen)—but it’s not supposed to be a Mac.
These two think the iPad’s lost the chance to fulfill that potential, mostly because Apple has missed the opportunity to build a vibrant developer ecosystem due to App Store policies. I hope that isn’t the case, though I think we have to look beyond the iPad to fully appreciate what might happen next. The introduction of tablets and transformative computing experiences continues to echo throughout a variety of industries. Graphic designers and illustrators have a new suite of tools to directly interact with their creations in the iPad Pro and the Surface. Similarly, tablet or hybrid devices have transformed schools—schoolchildren now have a “homework” device for all kinds of assignments. It’s true that we still need developers to imagine ever-more revolutionary applications for these devices, but there’s no denying that disruption is taking root.
Either way, the episode is well worth a listen. Enjoy from 15:50 to ~31:22 and 1:26:59 to the end of the show if you want to focus on the iPad discussion.
My bank, fitness and workout apps, and food delivery services I haven’t used in months—those were some of the 30+ apps interacting with Facebook data. Ostensibly this data is used to personalize ads.
As of today, our Off-Facebook Activity tool is available to people on Facebook around the world. Other businesses send us information about your activity on their sites and we use that information to show you ads that are relevant to you. Now you can see a summary of that information and clear it from your account if you want to.
Off-Facebook Activity marks a new level of transparency and control. We’ve been working on this for a while because we had to rebuild some of our systems to make this possible.
Now, thankfully, you can review these connections yourself and clear any history manually. Check out Facebook’s Off-Facebook Activity controls, and happy Data Privacy Day.
If the product is free, you are the product:
An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies.
In what appears to be something of a purposeful dark pattern, the only thing differentiating ads and search results is a small black-and-white “Ad” icon next to the former.
Hrm. The resulting change seems to work:
Early data collected by Digiday suggests that the changes may already be causing people to click on more ads. […] According to one digital marketing agency, click-through rates have already increased for some search ads on desktop, and mobile click-through rates for some of its clients increased last year from 17 to 18 percent after similar changes to Google’s mobile search layout.
Damn. I may start looking for a new search engine.
The most audacious commitment from Microsoft is its push to take carbon out of the atmosphere. The company is putting its faith in nascent technology, and it’s injecting a significant investment into a still controversial climate solution. Proponents of carbon capture, like Friedmann, say that the technology is mature enough to accomplish Microsoft’s aims. It’s just way too expensive right now. Microsoft’s backing — and its $1 billion infusion of cash — could ultimately make the tech cheaper and more appealing to other companies looking for new ways to go green.
Fantastic news. Carbon capture is a key opportunity for decelerating climate change. Hopefully more companies follow suit.
A surprising headline, but the restriction is very specific:
the new export ban is extremely narrow. It applies only to software that uses neural networks (a key component in machine learning) to discover “points of interest” in geospatial imagery; things like houses or vehicles. The ruling, posted by the Bureau of Industry and Security, notes that the restriction only applies to software with a graphical user interface — a feature that makes programs easier for non-technical users to operate.
Still, this is probably a marker of change to come. It’s a little odd to think of software as something that can be “exported”, however. Surely this isn’t a ban on shipping discs across a border. Software is downloaded. So, is this a kind of firewall?
Bohn and his co-founders are confident that, if done right, a proper system for wireless power transmission could shift not just how we think about keeping devices charged and powered on at all times, but also the types of devices we end up putting in our homes and what those devices get used for.
I’d say. If this works, it’ll remove a technical limitation that is deeply built into our mental models of how our gadgets work.
Guru is envisioning a world where you can keep all manner of battery-powered gadgets, big and small, all over your home or in every corner of an office, store, or warehouse without having to worry about where they draw power from or how long it lasts on a charge.
Eliminating the “where will I plug it in?” assumption might unlock opportunities across all ways of living and working.
Segway’s newest self-balancing vehicle won’t require you to stand up. Dubbed the S-Pod, the new egg-shaped two-wheeler from Segway-Ninebot is meant to let people sit while they effortlessly cruise around campuses, theme parks, airports, and maybe even cities.
Hmm. At least with self-balancing, this won’t happen.
Every part of this trial sounds made up. They should just air it in lieu of a Good Fight episode. Elizabeth Lopatto’s writeup is worth worshipping.
Spiro then coined the worst acronym I’ve heard in years, and I edit stories about aerospace so I know from bad acronyms. It is: JDART, for joking, deleted, apologized-for, responsive tweets.
Incredible.
But there’s at least one abstract takeaway that’s interesting to me:
At this point, Wood tried to enter an email exchange into evidence, resulting in a great deal of confusion on Judge Wilson’s part about how email reply chains work. (You read from the bottom.)
[…]
At this point, the “pedo guy” Twitter thread was entered into evidence, and the befuddled court had to be told that the reply chains work the other way on Twitter — the first tweet is at the top, and the last tweet is at the bottom.
Yet another example of the ways in which the world is accelerating faster than its institutions can keep up.
Scary:
One issue identified on an unnamed carrier’s implementation could allow any app on your phone to download your RCS configuration file, for example, giving the app your username and password and allowing it to access all your voice calls and text messages. In another case, the six-digit code a carrier uses to verify a user’s identity was vulnerable to being guessed through brute force by a third-party. These problems were found after researchers analyzed a sample of SIM cards from several different carriers.
RCS is supposed to be a big deal. It’s fascinating how a system-wide standard can be undermined by individual carriers’ implementations.
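The brute-force risk is easy to quantify. A six-digit code has only a million possible values, so without rate limiting or lockout it can simply be enumerated. Here’s a minimal sketch of that reasoning (the verification callback and guess rate are hypothetical, not details from the researchers’ work):

```python
# Hypothetical sketch: why a six-digit verification code is brute-forceable
# when there is no rate limiting or lockout. The try_code callback stands in
# for whatever verification endpoint a carrier exposes; it is not a real API.

def brute_force_code(try_code):
    """Return the first accepted guess, trying all 10^6 possible codes."""
    for n in range(1_000_000):
        guess = f"{n:06d}"          # codes run from "000000" to "999999"
        if try_code(guess):
            return guess
    return None

# Rough feasibility: at an assumed 100 guesses per second, the full space
# takes 1,000,000 / 100 = 10,000 seconds (under 3 hours), and the expected
# time to hit a randomly chosen code is half that.
```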
This report is intentionally broad and robust. We have included a list of adjacent uncertainties, a detailed analysis of 315 tech trends, a collection of weak signals for 2020, and more than four dozen scenarios describing plausible near futures.
Impressive work. I particularly like the CIPHER heuristic they use in analyzing signals: contradictions, inflections, practices, hacks, extremes, rarities.
Medical crowdsourcing offers hope to patients who suffer from complex health conditions that are difficult to diagnose. Such crowdsourcing platforms empower patients to harness the “wisdom of the crowd” by providing access to a vast pool of diverse medical knowledge.
An interesting application of crowdsourcing. What’s the incentive for healthcare providers to participate, though? I’m not sure doctors can bill for participation in Figure 1. I think the main reason they engage at all is curiosity, and that would likely degrade if, as the authors of the linked study discuss, there was a lot of “noise” from uninteresting posts by patients who aren’t medically literate.
Today, we’re excited to formally launch the final version of OPSI’s AI primer: Hello, World: Artificial Intelligence and its Use in the Public Sector
Another interesting output from the OPSI. It seems usefully pragmatic:
The AI primer is broken up into four chapters that seek to achieve three key aims: (1) Background and technical explainer; (2) overview of the public sector landscape; (3) implications and guidance for governments.
We find that high-, medium-, and low-engagement-state gamers respond differently to motivations, such as feelings of effectance and need for challenge. In the second stage, we use the results from the first stage to develop a matching algorithm that learns (infers) the gamer’s current engagement state “on the fly” and exploits that learning to match the gamer to a round to maximize game-play. Our algorithm increases gamer game-play volume and frequency by 4%–8% conservatively, leading to economically significant revenue gains for the company.
As ever with this kind of mechanism, are we sure we want this to exist…? The potential is no doubt powerful. Imagine interactive TV shows that modulate what they present based on readings of the viewer… Hrm.
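To make the learn-then-match idea concrete, here’s a toy sketch of my own (the states, thresholds, and expected play times are invented for illustration; the study estimates these relationships from gamer data rather than hard-coding them): infer the gamer’s current engagement state from recent sessions, then serve the round with the highest expected play time for that state.

```python
# Toy illustration of an "infer the engagement state, then match the round"
# loop. All numbers and thresholds below are made up for illustration.

# Hypothetical expected minutes of play for each (engagement state, round) pair.
EXPECTED_PLAY = {
    ("low", "easy"): 12, ("low", "hard"): 4,
    ("medium", "easy"): 10, ("medium", "hard"): 11,
    ("high", "easy"): 8, ("high", "hard"): 15,
}

def infer_state(recent_session_minutes):
    """Crudely infer engagement from average recent session length."""
    avg = sum(recent_session_minutes) / len(recent_session_minutes)
    if avg < 5:
        return "low"
    if avg < 12:
        return "medium"
    return "high"

def match_round(recent_session_minutes):
    """Pick the round with the highest expected play time for the inferred state."""
    state = infer_state(recent_session_minutes)
    return max(("easy", "hard"), key=lambda r: EXPECTED_PLAY[(state, r)])

print(match_round([3, 4, 6]))     # short sessions -> "low" state -> "easy" round
print(match_round([14, 20, 18]))  # long sessions -> "high" state -> "hard" round
```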
In the next three years, as many as 120 million workers in the world’s 12 largest economies may need to be retrained or reskilled as a result of Artificial Intelligence (AI) and intelligent automation.
cf. Lee Se-dol.
This is according to the latest IBM Institute for Business Value (IBV) study, titled The Enterprise Guide to Closing the Skills Gap.
Seems like an interesting guide. This metric surprised me:
In 2014, it took three days on average to close a capability gap through training in the enterprise. In 2018, it took 36 days.
I didn’t know this measure existed, but I can see the utility. As knowledge work grows ever more specialized, this time-to-capability can only grow.
The South Korean Go champion Lee Se-dol has retired from professional play, telling Yonhap news agency that his decision was motivated by the ascendancy of AI. “With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts,” Lee told Yonhap. “Even if I become the number one, there is an entity that cannot be defeated.”
Wow. Perhaps the first real example of “AI took my job?”
The Twttr prototype app gave me another feedback form today. It’s been my habit to complain, at every opportunity, about the trends page you have to engage with whenever you go to the Search tab. I feel a little bad for the designers and developers, because the beta is really all about how conversations on Twitter look and feel. Still, this feedback form was no different. Here’s what I wrote in the “Dislike” section:
I wish I could control the trends page.
It is the absolute worst part of my Twitter experience. It just feels… unhealthy. Like going through a grocery store magazine aisle. Sure, some of the headings are instructive or inspiring, but many are gross, irrelevant, or completely malignant gossip.
The experience is also invasive. Because trends are forced upon you when you intend to search for something specific, and because they’re algorithmically tuned to be as attention-grabbing as possible, it’s easy to be distracted and forget why you even entered the search pane. I never explicitly consent to learning about celebrity gossip or US politics when I use Twitter. If I tap on some of those topics, it’s not because I want to. It’s because it’s malicious clickbait. In turn, it’s corrupt to design an experience that drags the user through it repeatedly.
Sure, this content is viral. But shouldn’t we be inoculating against viruses, not encouraging them to spread?
The over- and misuse of AI is one of my biggest tech pet peeves. It truly is evil to tack the AI term onto the description of most products. It also damages the long-term potential of AI by corrupting what it means—especially for the everyday people who aren’t involved or invested in building these tools, but who will use them (or be used by them).
Arvind Narayanan on Twitter:
Much of what’s being sold as “AI” today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and push back. Here are my annotated slides: https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
Key point #1: AI is an umbrella term for a set of loosely related technologies. Some of those technologies have made genuine, remarkable, and widely-publicized progress recently. But companies exploit public confusion by slapping the “AI” label on whatever they’re selling.
Key point #2: Many dubious applications of AI involve predicting social outcomes: who will succeed at a job, which kids will drop out, etc. We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when “AI” is involved.
Key point #3: transparent, manual scoring rules for risk prediction can be a good thing! Traffic violators get points on their licenses and those who accumulate too many points are deemed too risky to drive. In contrast, using “AI” to suspend people’s licenses would be dystopian.
“In the five years that we’ve had to assess the effect [the Gigafactory has] had on the workforce, on the community, I think there have been these ramifications that we talk about in the episode that nobody was really prepared for,” Damon said in an interview with The Verge. “Like, we knew there was going to be an issue with housing, which other cities are experiencing, too. But that’s become super critical.”
Side-effects of growth are not a new problem, but the massive initiatives we’re seeing recently might spark new varieties of old issues.
There is a significant gap in research about Canadian data collection activities on a granular scale. This lack of knowledge regarding data collection practices within Canada hinders the ability of policymakers, civil society organizations, and the private sector to respond appropriately to the challenges and harness unrealized benefits.
So true. This looks like an interesting series from the great team at Brookfield.
Something strange is happening with text messages in the US right now. Overnight, a multitude of people received text messages that appear to have originally been sent on or around Valentine’s Day 2019. These people never received the text messages in the first place; the people who sent the messages had no idea that they had never been received, and they did nothing to attempt to resend them overnight.
It’s incredible that this could happen on a scale big enough to hit headlines now, yet go unnoticed on Valentine’s Day itself.
That’s one of the problems with our ever-more-complex technologies: we’ve grown accustomed to the bugs. It gets easier and easier to dismiss weird tech events as glitches and move on without worrying. Unreliability is, itself, unreliable.
But there can be major consequences to seemingly innocent bugs:
… one person said they received a message from an ex-boyfriend who had died; another received messages from a best friend who is now dead. “It was a punch in the gut. Honestly I thought I was dreaming and for a second I thought she was still here,” said one person, who goes by KuribHoe on Twitter, who received the message from their best friend who had died. “The last few months haven’t been easy and just when I thought I was getting some type of closure this just ripped open a new hole.”
Incredible achievement, but it makes me wonder—what are the 0.2% of humans doing differently?
These stories of AI achievement are sure to proliferate in the coming years. By focusing on the people who can still outthink machine-learning strategies, we might learn something about how humans and machines can best complement each other.
“[John F. Kennedy’s] challenge disturbed the National Aeronautics and Space Administration’s original plan for a stepped, multi-generational strategy: Wernher von Braun, NASA’s chief of rocketry, had thought the agency would first send men into Earth’s orbit, then build a space station, then fly to the moon, then build a lunar colony. A century hence, perhaps, humans would travel to Mars. Kennedy’s goal was also absurdly ambitious. A few weeks before his speech, NASA had strapped an astronaut into a tiny capsule atop a converted military rocket and shot him into space on a ballistic trajectory, as if he were a circus clown; but no American had orbited the planet. The agency didn’t really know if what the president asked could be done in the time he allowed, but it accepted the call.”
This required the greatest peacetime mobilization in the nation’s history.
– Jason Pontin.

Figuring out how to live forever is expensive.
From Re/code’s story on Google’s partnership with Ancestry.com.
Ryan J. A. Murphy
ryan@fulcra.design
ryanjamurphy
Canada
Memorial University of Newfoundland
fulcra.design
Helping changemakers change their worlds through systemic design and with innovation, leadership, and changemaking education.