
Thursday, September 7, 2017

Academia and Social Media | WTF

I feel so outraged at the moment that I simply cannot not write about it. So here we go, this piece in Science last week:

http://science.sciencemag.org/content/357/6354/880.2#1504269787069

Summary: Scientists should become their own social media influencers to popularize their research.

Great idea -- I already envision future tenure requirements: "The successful candidate has at least 50k Twitter followers and maintains a vast network of social media influencers."

Seriously? Are you shitting me?

That kind of bullshit is one of the things that drove me away from an academic career path before I even finished my PhD. I am so disgusted by it. And believe me, many people in academia are too.

So what is the problem, you might ask. Very simple: once someone starts this kind of thing, it quickly becomes the norm, up to the point where scientists will be evaluated on their ability to achieve social media reach. Don't believe me? Well, we have already seen this happen in science over the last 10+ years.
I am talking about "citation metrics", especially the infamous h-index. In many places it literally became the "gold standard" for evaluating scientists. Might it be for hiring decisions, tenure decisions or simply decisions on whether or not to grant a proposal. People will look at your publication history and judge it solely based on how well it has been received by others. Sounds all very reasonable at first, but turns out to be fatally flawed. Why? It promotes "hype research". If the metric I have to optimise to achieve my academic career goals (i.e. get a permanent position) is reach, I will engage in research that currently resonates with as many people as possible. Let me repeat this very slowly: people - will - engage - in - research - that - is - well - perceived. I don't know about you, but for me this rings a very loud alarm bell. This undermines the most important pillars of academia: intellectual independence and the possibility, even the obligation, to engage in unpopular research topics: to be an independent mind; to explore the unknown, the un-hyped. However, incentive schemes like the current ones, make this harder and harder -- especially for young researchers.
I know a couple of young assistant professors who bluntly told me that for the next few years they simply have "to play the game": do the research their peers want, and once they are tenured they will be able to explore more freely. This is not some dystopian fantasy; it already is reality in academia. But even worse: once you have engaged in "hype research" for six years and, let's say, managed to build yourself a reputation, do you think people will stop doing what they are doing? The apparent fame, the visibility, the invited talks, the citations -- it's basically the opium of science. And what you end up with is a bunch of attention whores, people who take themselves way too seriously.

I know this won't resonate with everyone in academia. And it is good that it does not, as there are still academic communities where all of this is less pronounced. But a large portion of academia has already moved in this direction, and in our frenzy to measure success, many others will follow. Also, given that there are far too few permanent academic positions for all the aspiring PhD students and postdocs, judging people's potential is indeed a huge challenge, and there must be some kind of objective measure. I just don't think it's citation metrics, and it is certainly not social media reach.

Sunday, June 18, 2017

"Startup"

The word "Startup" is used in an inflationary way these days. Most people seem to not know that:
A startup is a company designed to grow fast.
http://www.paulgraham.com/growth.html

Wednesday, May 24, 2017

Yoshua Bengio on human-level AI

(Embedded video of the talk.)

Key take-aways, in case you don't have time to watch it.

We are still very far from human-level AI.

Everyone should be aware of that before jumping into panicked horror-scenario conversations about machines taking over the world and wiping out the human race.

There is too much hype about AI these days, especially in public discussion and the media.

I want to add: that hype rests on a near-complete lack of understanding among the general public of how the current "AI" methods work and what they can and cannot do. I even think that "Artificial Intelligence" is a very misleading name for the currently available methods. And big, historically trusted players like IBM marketing their efforts as "cognitive computing" is, in my opinion, even worse: it's misleading and almost deceiving. After listening to talks (read: sales pitches) about IBM's Watson, I have been asked by executives whether I think their AI could help them solve *insert really tough business problem here*, because it seems to be so much smarter and more efficient than real people.

Deep networks can avoid the curse of dimensionality for compositional functions.

Which also means they can only learn tasks that can be expressed as such compositions. Which in turn means that tasks that cannot be decomposed this way cannot be learned efficiently. Is creativity such a task?
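To make "compositional" a bit more concrete, here is a toy sketch in Python (the function and its structure are invented for illustration): a high-dimensional function built entirely out of small, low-dimensional pieces. A deep network whose layers mirror this tree only ever has to approximate two-dimensional pieces, which is the intuition behind "avoiding the curse of dimensionality"; a shallow approximator would have to cover the full eight-dimensional input space at once.

    import numpy as np

    def g(a, b):
        # A low-dimensional building block (two inputs only).
        return np.tanh(a * b + a - b)

    def compositional_target(x):
        # An 8-dimensional function composed of 2-argument pieces in a binary tree.
        l1 = [g(x[0], x[1]), g(x[2], x[3]), g(x[4], x[5]), g(x[6], x[7])]
        l2 = [g(l1[0], l1[1]), g(l1[2], l1[3])]
        return g(l2[0], l2[1])

    print(compositional_target(np.random.randn(8)))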

Thursday, May 11, 2017

Haiku #2

moon in the firmament
a nightingale sings at the lake
emptiness, nothing and calm

Wednesday, May 10, 2017

Things unsaid at #mcb17 #rp17



Forty-five minutes went by pretty fast and many things were left unsaid. So here is a collection of thoughts and responses to yesterday's panel at #mcb17 #rp17 and to the event in general.

As I said at the end of the panel, I think we all have to learn to differentiate when we talk about "data". It seems that when people say "data", what they mean is "user data that is used for advertising purposes". But optimising ads is only one minor part of what user data can be used for. Let me explain.

Datenkraken ("data octopuses")

How does the advertising business work? At the end of the day it is solving a matching problem: getting "the right" ads to "the right" users. This is, in principle, not bad at all, as both advertiser and user benefit from it: the advertiser makes good use of its marketing bucks, and the user (in the best case) gets product information that is actually of interest. So how do advertisers know where to allocate their marketing budget? Which user might be most interested in their product? That is where the big global companies and ad networks come into play, whose business model is to "get to know" the user in order to predict (based on statistical arguments) which ad might be most interesting to that user. For those companies, having as much data as possible, and especially as diverse ("multi-dimensional") data as possible, is a clear market advantage: the more dimensions and the more complete the data, the better the statistical models that predict which ad might interest the user. In short: user data is an integral part of their business model. I do not want to discuss whether this business model is legitimate or not; I just want to state what it is, in order to get to the next point.
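To make this a bit more tangible, here is a deliberately toy sketch in Python of that matching problem; the user features, ads and weights are all invented. The point is only that the prediction gets better the more (and the more diverse) dimensions the user vector has:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical user profile: each dimension is one thing the ad network
    # "knows" (age bucket, interests, recent searches, location, ...).
    user = np.array([1.0, 0.0, 1.0, 0.5, 0.0])

    # One weight vector per ad, in practice learned from past click data
    # (here simply made up).
    ad_weights = {
        "running_shoes": np.array([ 0.8, -0.2,  1.1,  0.3, -0.5]),
        "luxury_watch":  np.array([-0.4,  0.9, -0.3,  0.1,  0.6]),
    }

    # Predicted click probability per ad; the network shows the top-scoring one.
    scores = {ad: sigmoid(w @ user) for ad, w in ad_weights.items()}
    print(max(scores, key=scores.get), scores)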

Data as a means to learn and provide service
Every shop owner, to use an offline, real-life example, observes her shop. She observes how people enter, what they look at, which corners they never go to and, in the end, what they buy. In short: she gathers data. I think it's fair to say that no one would expect the shop owner not to use that data to optimise her business: to order more of item A, to drop item B, and maybe to re-arrange the shop in such a way that her customers have a better shopping experience and, of course, her profit increases. When it was said at the panel that it's "to some extent" understandable that website providers might use data to provide better service, but that still(!) in the end it's about making more profit, the only thing to reply is: "yes, of course". In business it is always about increasing profits. But is this bad? No, because increasing your profits means that you were apparently able to provide a product or a service that consumers want and are willing to pay for. (Clearly, if you are not into markets and business in general, you will not buy this argument.) In short: in this scenario user data is used to optimise another, underlying business model.

Data in a (news) publishing company

And this is also how we see data. We are not an ad network. Our business model is not optimising ads. Our business model is to provide excellent, high-quality journalism and to help our readers stay broadly informed in a world of ever-increasing complexity (other former content providers might see this differently and have actually shifted to the ad-network-based business model outlined above). Data on how our users use our products might help us transform the news media business into a "digitally native business". This is the real issue that (news) media companies face: what does news consumption look like in a digital-only world? A world where there is an overabundance of information, mostly for free; a world where I could keep myself busy 24/7 simply reading news. What value can a (formerly traditional) newspaper-publishing company provide? We strongly believe there is value in what we do, as understanding and analysing a complex world is becoming more, not less, important. And yes, we believe there are people willing to pay for that service. Still, data-enabled innovation is (in my personal opinion) an important key to succeeding in re-defining, or let's say extending, this very old business model for a more digital world. Specifically, I am convinced that we will see more "smart news products": news products that help us get the news and information we need in order to stay up to date.

Personalisation --> Smart News Products

I just introduced the phrase "smart news products". Why? Sure, I could say "personalisation", but since the word "personalisation" seems to be emotionally charged and negatively associated with "personalised ads", let's stick to "smart news products". So what, in general, makes a product "smart"? Mostly it's about being adaptive to the user who uses it. Example: my smartphone is, well, smart, because it's not a one-size-fits-all product. It has apps that customise my personal experience. Google's search is smart. From the potentially millions of search results, it shows me the ones that are most likely relevant to me. So when I am in Berlin and I search for "bakery", what I am actually looking for (most likely) is "bakery in Berlin", and Google deduces this from my location. It's "smart", and it uses data (in this case the implicit information that I am in Berlin) to do so. In a similar way, I imagine "smart news products": news products that are, to some extent, adaptive to the user, using data.

Now I already hear you thinking "filter bubble!". This is a tricky issue and worth a separate blog post. The only thing I want to say here is that "the filter bubble" is not an inevitable effect of using "algorithms" and data to build smart products. It can be avoided, and it comes down to how I, as a data scientist, design the recommendation algorithm behind the smart news product. Sure, I could go ahead and say "you always read articles tagged X, so I will show you everything I have on X", but that's only *one* choice (and frankly a pretty bad one). I could also say "you always read articles tagged X, and because I have this data, I will show you articles that are *not* tagged X in order to increase the diversity of information you see". That is another completely valid (and possible) design choice for the underlying algorithm. Technically it's not as simple as it sounds, but it's doable. Clearly, for a smart news product the reality has to lie somewhere in the middle, or in both as two products: one named "more on X", one named "things you usually don't read". Would this be a valuable added service to our readers? I don't know, but we will try.
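To illustrate that this really is just a design choice, here is a toy sketch in Python; the article pool, the tags and the "diversity" knob are all invented. Same data, one parameter, and you can dial anywhere between "everything on X" and "things you usually don't read":

    import random

    # Hypothetical article pool: (article_id, set of tags).
    articles = [
        ("a1", {"politics"}), ("a2", {"politics"}), ("a3", {"economy"}),
        ("a4", {"science"}),  ("a5", {"culture"}),  ("a6", {"politics"}),
    ]

    def recommend(read_tags, k=3, diversity=0.5):
        # diversity = share of recommendations drawn from OUTSIDE the user's
        # usual tags; 0.0 is the pure "more on X" filter bubble.
        familiar = [a for a, tags in articles if tags & read_tags]
        fresh = [a for a, tags in articles if not tags & read_tags]
        n_fresh = round(k * diversity)
        picks = random.sample(familiar, min(k - n_fresh, len(familiar)))
        picks += random.sample(fresh, min(n_fresh, len(fresh)))
        return picks

    # A reader who only ever reads articles tagged "politics":
    print(recommend({"politics"}, diversity=0.0))  # everything on X
    print(recommend({"politics"}, diversity=0.5))  # diversity mixed in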


Tuesday, April 18, 2017

Haiku #1

snow falls in april
waters bursting, birds confused
stranger things with time

Thursday, February 23, 2017