Offtopic, but I have a really difficult time reading articles like this. I don’t know if this reflects a problem with the style or my ability to focus, but I find it really annoying:
> “SANDHOGS,” THEY CALLED THE LABORERS who built the tunnels leading into New York’s Penn Station at the beginning of the last century. Work distorted their humanity, sometimes literally. Resurfacing at the end of each day from their burrows beneath the Hudson and East Rivers, caked in the mud of battle against glacial rock and riprap, many sandhogs succumbed to the bends. Passengers arriving at the modern Penn Station—the luminous Beaux-Arts hangar of old long since razed, its passenger halls squashed underground—might sympathize. Vincent Scully once compared the experience to scuttling into the city like a rat. Zoomorphized, we are joined to the earlier generations.
This goes on for about seven paragraphs before I have any idea what the article is about. I understand “setting the scene,” but I can’t tell whether to care about an article if it meanders through this flowing exposition before indicating its central thesis.
It seems like a popular style in thinkpieces and some areas of journalism. The author writes a semi-relevant title, a provocative subtitle, and five to ten paragraphs of “introduction” that throw you right into the thick of a story whose purpose isn’t clear unless you already know what the article is about. Rather than capturing my attention with engaging exposition, I find it takes me out of it. But it must work if it’s so ubiquitous; presumably their analytics have confirmed this style is engaging.
A thought: don't let the (valid) criticism alone dissuade you from reading this.
IMO the author makes some very valid points about fuzzy products and endpoints in the current AI/data/ML/magic craze. These are under-articulated elsewhere, because, well, hey, there's a lot of money flowing! Who wants to be a killjoy and not "get it" (just like in 1999 ;)?
Two more specific points: 1. The descriptions of the CEO are eerily familiar to me. This guy is almost an archetype. Reminds me of a person I've worked with in that role who was also associated with a similar-ish company. It really paints the con-game side of all this.
2. A deeper point (and worth the read for me) was the author's thinking about how all this didn't fit existing needs and workflows and then has a chilling thought: "It’s possible that the market for a user-hostile data system that inaccurately predicts the future and turns its human operators into automatons exists after all, and is large." You can make an argument that this kind of thing has already happened in modern customer service and, with greater negative impact, in healthcare. I.e. where the tail of easy metrics and saleable endpoints ends up wagging the dog of quality.
> Faced with the impossibility of determining whether a technology is intelligent or not—since we don’t know what intelligence is—Silicon Valley’s funders are left instead to judge the merit of a new idea in AI according to the perceived intelligence of its developers. What did they study? Where did they go to school? These are the questions that matter.
This is a perfect summary of the VC situation today. Too much money chasing no-one knows what exactly, but they're sure they'll know it when they see it.
At this rate, it looks like we need a "Fucked AI," in the style of "fuckedcompany.com".
These people were eating VC hype money to build Hagbard's FUCKUP from the Illuminatus! Trilogy. 
Not sure who I feel sorriest for: the smart employees wasting years of their prime chasing some unattainable pipe dream, the VCs who got suckered into pouring their money into vaporware precog technology, the author trying to disguise a shit river with meandering prose, or my own upcoming pay cut when the AI winter sets in.
First Universal Cybernetic-Kinetic-Ultramicro-Programmer (FUCKUP). FUCKUP predicts trends by collecting and processing information about current developments in politics, economics, the weather, astrology, astronomy, the I Ching, and technology.
Excellent Sunday morning long read!
Some of PreData's recent "insights":
"China Trade War Fears Still Running High"
"Mall Blaze Sparks Outrage Across Russia"
In short, nothing that couldn't be gleaned from the briefest skim of tomorrow morning's WSJ.com headlines. One can stay better informed by leaving a Bloomberg TicToc tab (which is partially machine generated) open all day.
My takeaway is that the world of the Jim Shinns is rapidly approaching extinction. Deals done poolside at country club dinner dances. Name-game schmoozing. Serendipitous encounters on private islands. What was considered the predominant pathway to immortality in Fitzgerald's day.
Viable alternatives exist now. And any business model solely differentiated by prestige will be subsumed by free or near-free competition.
> Machine learning, the logic- and rule-based branch of AI supporting Predata....
That's a really embarrassing mistake.
Flawed? You bet. Overwrought? A bit.
But I found this Sunday AM read enjoyable, articulate, and largely on-point (overlooking a few minor scientific errors).
The core themes here are about the hubris of a rich CEO/founder, the zaniness of the current AI "market," and their resultant effect on a particular NYC startup.
This is a season of "Silicon Valley" (HBO) done east-coast, hedge fund, Ivy League style.
Outside of the firms owned/operated by the real clever boys, I wouldn't be surprised if this describes the vast majority of "AI" efforts unfolding at dozens/hundreds/thousands of companies. Everybody is getting on the bandwagon, and most either have no clue or find out, at the end of the day, that their customers don't even want what they're selling.
I'd be shocked if anyone in the industry hasn't worked for or with a Jim. Spot-on.
This startup's existence and failure is yet another symptom of how grossly we overestimate what AI can do. If the task isn't simple, repetitive, or clearly defined (and real-world tasks rarely are), it's probably not going to succeed. Are there any AI startups that are an anti-pattern here?
Reading about what Predata was trying to do reminds me of the field of Psychohistory in Asimov's Foundation series.
The point being made is: Technology without vision is dehumanizing. This is widely known and is, for example, the reason good schools make undergrad engineering students take at least a few humanities classes before they leave.
Technology without vision is dehumanizing - it happened with Penn Station, where narrow quantitative and engineering goals displaced the broader human ones and led to the widely-hated station that's there now, which was excavated by people who were called hogs, and which makes passengers feel like rats. The loss is especially acute there, since everybody knows what the old station was like ( https://duckduckgo.com/?q=old+penn+station&kp=-2&iax=images&... ). It was an edifice comparable to the great gares and bahnhöfe of Europe (or to Grand Central which for some reason we decided to keep), a monument to national power, industrial wealth, and the technologies of the time, but also a space that evoked something a little more noble in the human spirit somehow.
The writer is also drawing a parallel with the dehumanizing effect of the particular startup he worked for. The analysts are the hogs, he's the rat, his own perceived loss of creativity (probably a bit exaggerated... aahhh youth) is the dehumanization part, and the absentee CEO is the lack of vision. (If a CEO has one function, it's to provide vision. And in second place, not far behind, is to establish company culture.)
Arguably, placing technical/quantitative goals above more humanistic ones is what an organization like Nazi Germany was all about. But obviously it's way more complicated than that, and I don't intend to address it further.
I would point you toward Dmitri Orlov's concept of a Technosphere. Analogous to the "biosphere," it models human technology as a quasi-intelligent entity that is global in scope.
Excerpt (not much exposition but you'll get the point): https://cluborlov.blogspot.com/2016/02/the-technospheratu-hy...
The people here are the ones who most need to hear this message. Some will doubtless resist the criticism of ML/datasci with the fervor of someone whose long-held religious belief is being challenged for the first time. But it's needed. Feel free to prove the critiques wrong, by the way; that's kind of the whole point. Prove them wrong with broad projects that actually benefit humanity instead of a mess of unintended consequences and unimpressive bullshit.
I found the author slightly irritating on several occasions, dropping veiled references to Valleywag-style anti-Silicon Valley memes, and then I got to the part where he regurgitates that idiotic article about the brain not processing information, and about there being something magical about human brains that cannot be simulated.
He is right that he had no business being called a “director of research,” as his skills seem to center on cribbing thoughts pulled from other people's thinkpieces. It's clear that he doesn't have a deep background in either neuroscience or engineering and that he was brought to the company from a background in business journalism.
In his condemnation of the state of AI research, there is no mention of AlphaGo, nor any description of the trainable pattern-recognition techniques that have swept the deep learning scene over the last six years.
I'm sorry to be so harsh, but there is a certain tone to this piece ("let's hate all those startup a*holes"; "Mark Zuckerberg can't write like F. Scott Fitzgerald because his knowledge of the liberal arts is too limited, unlike mine") that reads as snooty class signaling among a certain hipster set.
There is a compelling story in here, but to me the author's general attitude is just condescending to everyone around him.
It's an ad for their company posed as an opinion piece.