Release your thoughts: publish as you go
I really like the attitude some people take to leveraging their work – for example, here: Mark Litwintschik reports on how he installed TensorFlow and ran a simple but instructive and funny “Hello World” model (based on Joel Grus’ hilarious post here). Mark’s article is easy to read, helpful, and sure to find its audience.
Here’s the point: Every one of us (e.g., me and my team) who has tried some hands-on deep learning had to follow basically the same steps: installation marathon, follow an easy example, collect lessons learned. And we all had to document our work somehow for the sake of the team or the (next) project. And we all were glad to have this job done and move on to more interesting stuff.
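To make those “same steps” concrete, here is a minimal sketch of the kind of “Hello World” such a first session ends with – purely illustrative, not Mark’s or Joel’s actual code, and written against the TensorFlow 1.x API that was current at the time:

```python
# Minimal TensorFlow "Hello World" (TensorFlow 1.x API).
# Purely illustrative -- the kind of first script everyone writes
# after the installation marathon, not anyone's actual published code.
import tensorflow as tf

# Build a trivial computation graph.
hello = tf.constant("Hello, TensorFlow!")
total = tf.constant(3.0) + tf.constant(4.0)

# Run the graph in a session and inspect the results.
with tf.Session() as sess:
    print(sess.run(hello))  # b'Hello, TensorFlow!'
    print(sess.run(total))  # 7.0
```

If that runs, the installation marathon is over – and the lessons learned along the way are exactly what is worth writing down.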
I’d suggest that we all (having this experience with research/development “chores”) try to follow Mark’s example: put 5% more effort into public documentation. It helps others, leaves a footprint with one’s own name on it, and is great training in writing.
Another great example is this blog post on image completion, giving a nice tutorial-style introduction and code for this arXiv paper that appeared just two weeks earlier. Brandon Amos, the blogger, presumably dove into understanding the paper and, by starting to author the blog post, simply collected the findings and explanations he needed for himself anyway. Awesome.
Ok, one thing is to leverage the work you have done. Another good argument for publish-as-you-go is that it is our purpose as a university to educate. Take this blog post by OpenAI on generative models, one of the currently hot trends in unsupervised machine learning. It describes work in progress, summarizes a few trends and some of their own papers, links to open-source implementations – and is incredibly helpful for me in preparing new lectures on unsupervised learning. It also points at potential use cases of these methods in areas we are currently trying to explore, too: continuous control tasks in real-world scenarios via reinforcement learning.
(OpenAI as an organization has been around for about nine months (founded December 11, 2015). They are committed to advancing the state of the art in AI in a way that benefits all of humanity. They do this by pursuing AI/ML projects that every scholar in the field dreams of working on (in short, projects that are “fun” in that weird way scientists define it, including the pain of research), and under the pressure of somehow being economically reasonable (i.e., doing well with the available funding).)
I’d like to suggest that OpenAI’s situation is somewhat comparable to what we as a university of applied sciences are doing in R&D: limited resources, a small (but awesome) team, a lot of things just started. OpenAI’s purpose is to benefit society; they pursue projects in a way that allows for the continual publication of incremental findings, which the AI/ML community applauds because they are really helpful. There’s an article on their blog roughly every two weeks. Nobody would accuse the authors of these articles of suffering from the publish-or-perish syndrome (which, admittedly, exists among academics).
I’d like to suggest we imitate OpenAI’s publishing strategy: there’s nothing wrong with orienting your research towards the next publishable result, as long as you work on things useful to others, and as long as you seek to write up things you imagine others would be glad to read. Sometimes, I figure, two things hinder us from doing this:
- Pride and perfectionism: We don’t want to share work that might still be improved.
- Fear of one’s own unworthiness / insignificance: The “who am I to publicly say something on this great topic among all these other experts” syndrome; a close relative is the “I don’t want to write for the sake of writing; I don’t want to tread the marketing mill / create clickbait” syndrome, alluding to the “publish-or-perish” argument above.
We shall overcome this.