This post represents my didactic concept for the AI course. It was awarded the 3rd prize in the annual best teaching award competition of my university in 2019. In German.
This is a regularly updated collection of links to valuable reads from non-technical newspapers: opinion pieces, but thought-provoking ones. In German (hence the title). Some might end up behind a paywall after some time - sorry for that!
Here’s a (regularly updated) collection of links to my favourite MOOCs and other resources for learning or deepening Artificial Intelligence and in particular Machine Learning / Deep Learning skills:
Dear (prospective) ZHAW student, a very warm welcome to academia! You may just have entered the bachelor’s programme or “Fachstudium”, or you are already a mature engineer, ready to start your master’s or PhD studies - chances are you haven’t been exposed much yet to our setting of learning within an applied research and development context.
Recently, I engaged in a discussion within the Expert Group on Data Ethics on the pros and cons of the term “algorithmic bias”, which describes the fact that certain groups of people might be discriminated against by an automated decision-making system, and how to prevent this. While all research in this sphere is very important and rightly at the forefront of current discussions in data science, artificial intelligence and digital ethics (see e.g. here, here or here), I think the term itself might do more harm than good in the public discussion.
What kind of digitalization support do SMEs need? This question was pondered on August 29 when Innosuisse assembled their experts, coaches, staff members and executives under the beautiful roof of CERN’s Globe of Science and Innovation for the Innosuisse Plenary 2018. The topic of innovation was omnipresent, and I had been invited as a representative of the Swiss Alliance for Data-Intensive Services to start the discussion in the breakout session on “digitalization and SMEs” with a keynote. I delivered some pointed hypotheses based on the experience of almost 2 years of Data+Service and numerous CTI/Innosuisse projects. A recording of the presentation is available online, together with the slides. The talk follows the important properties of digital innovations (namely, being interdisciplinary, aiming at automation at scale, and coming at a high speed). These are my conclusions, originally published on the Data+Service blog.
I recently took David Silver’s online class on reinforcement learning (syllabus & slides and video lectures) to get a more solid understanding of his work at DeepMind on AlphaZero (paper and more explanatory blog post) etc. I enjoyed it as a very accessible yet practical introduction to RL. Here are the notes I took during the class.
There is a whole body of literature on the topic of faith and science (and a good example is the work of C.S. Lewis, even if it is a rather unorthodox specimen). Here’s another one, unorthodox in not attempting apologetics, but merely relaying a testimony in the sphere of science.
This is a collection of links to some interesting recent theoretical advances in order to better understand deep learning. It is not complete, not even comprehensive. It is more of a reading list for the upcoming Christmas holidays. More on the topic of deep learning theory can be found in this new MOOC by Stanford professors Hatef Monajemi, David Donoho and Vardan Papyan, in this DL theory class by Ioannis Mitliagkas of U Montreal, and in this overview blogpost by Dmytrii S. on why deep neural nets generalize so well. A good collection of links is also included in this blog post by Carlos Perez.
Coming to the end of a writing-heavy week with 5 paper deadlines, the experience from these publishing processes reinforced for me the three necessary conditions for when to publish (and what to focus on in the paper, besides the story line). When is my work advanced enough that I can publish it? When can I attempt not only writing it down (you should always do that, as it clears one’s thoughts), but also making it accessible to the rest of the world (being aware of your responsibility for the body of human knowledge, the valuable time of potential readers, and your future reputation)?
Recently, I got intrigued by so-called Generative Adversarial Nets or GANs (see the original paper, the important improvement to make it practical, an easy-to-understand implementation and a nice application here). So much so that I derusted my programming and Linux command line operation skills, learned Docker, and started playing around myself for the first time in several months.
For a young researcher the diversification in the academic landscape can be a huge challenge: am I interested / proficient in machine learning - or pattern recognition? Am I a computer vision guy (because I work mainly on images), or in the field of predictive maintenance (because I also worked on such tasks)? Am I a data miner because I mine data, and can I also add artificial intelligence to the list of my research interests just because I have published on its sub-field of machine learning?
How to do science is a huge topic. I intend to treat it utterly wrong here by not giving it the depth it deserves. Rather, I want to summarize some key points that might be beneficial for students facing the challenge of writing a first scholarly thesis (“wissenschaftliche Arbeit”). As some of these students will have German as their first language, I will translate some important phrases.
Once in a while in a discussion on education or science, the topic of the perceived opposition of theory and practice comes up. It is usually raised by one party with a strong view on either a theory or an application focus, with the goal of discrediting the other viewpoint: “This is just theory (and thus worthless), because what we need is to make it run”. In a classroom setting, you sometimes hear it from students complaining about too much abstract knowledge and too few examples.
Every once in a while you have to write a draft for some sort of text, as a starting point for somebody to give feedback or improve upon it – maybe a short abstract, a chapter for a thesis or paper, an important email or a proposal. So, what makes up a good draft?
Over the last 15 years, I’ve come to value certain personality tests (yes, the very thing Cambridge Analytica just reported being able to create out of 10 of your social media likes; but the potential ethical drawbacks of big data are the topic of another post).
Public opinion has it that the current success of deep learning is built upon ideas from the 1970s and 1980s, enhanced by the availability of (a) increased computational power and (b) “big” data. Some add that a few minor algorithmic improvements have also been involved (e.g. BatchNorm, weight initialization [see also explanation here], ReLU [rectified linear unit] nonlinearities, dropout regularization, or the ADAM optimizer). Building deep neural networks, the legend goes on, then boils down to clever engineering of the knobs and faders of these “black boxes”; and nobody really understands why they work or how they produce results: Pure black magic at worst, empiricism instead of science at best.
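To make those “minor” improvements concrete, here is a minimal pure-Python sketch of three of them (my own illustrative toy code, not from the post or any framework; real libraries implement these far more efficiently):

```python
import math
import random

def relu(x):
    # ReLU nonlinearity: pass positive values through, clamp negatives to zero.
    return [max(0.0, v) for v in x]

def dropout(x, p=0.5, training=True):
    # "Inverted" dropout regularization: randomly zero units during training
    # and rescale the survivors so the expected activation is unchanged.
    if not training:
        return list(x)
    return [0.0 if random.random() < p else v / (1.0 - p) for v in x]

class Adam:
    # ADAM optimizer for a flat parameter list: per-parameter learning rates
    # from bias-corrected running means of gradients and squared gradients.
    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.m, self.v, self.t = None, None, 0

    def step(self, params, grads):
        if self.m is None:
            self.m = [0.0] * len(params)
            self.v = [0.0] * len(params)
        self.t += 1
        updated = []
        for i, (p, g) in enumerate(zip(params, grads)):
            self.m[i] = self.b1 * self.m[i] + (1 - self.b1) * g
            self.v[i] = self.b2 * self.v[i] + (1 - self.b2) * g * g
            m_hat = self.m[i] / (1 - self.b1 ** self.t)   # bias correction
            v_hat = self.v[i] / (1 - self.b2 ** self.t)
            updated.append(p - self.lr * m_hat / (math.sqrt(v_hat) + self.eps))
        return updated
```

Each piece is only a handful of lines, which is exactly why they are dismissed as “minor” in the narrative above, despite their outsized practical effect.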
I recently came across the notion of “type A” and “type B” data scientists. While the “type A” is basically a trained statistician who has broadened his field towards modern use cases (“data science for people”), the same is true for “type B” (B for “build”, “data science for software”), who has his roots in programming and contributes more strongly to code and systems in the backend.
I really like the attitude of some people regarding leveraging their work – for example, here: Mark Litwintschik reports on how he installed TensorFlow and executed a simple but instructional and funny “Hello World” model (based on Joel Grus’ hilarious post here). Mark’s article is easy to read, helpful, and sure to find its audience.
During my spare time in the last year and ongoing, I have been involved in a very interesting project bringing together computer science expertise and charity: the ENJOY project (my term for it).
Interestingly enough, the current hype topic (rightly so!) in Machine Learning is Deep Learning – which is largely the current term for ongoing research in Neural Networks.
I just finished reading Donald Knuth’s “Things a computer scientist rarely talks about”; and thought it fits nicely into this series of literature-related posts. The book is a transcription of 6 lectures Don Knuth gave at MIT about “God and Computers”.
I love books; I enjoy reading them, and I recently learned (here) that reading books is even the best way I can learn (that might be different for you, though).
Earlier this year, my colleague Kurt and I wrote an article (in German) that included a chapter on the origin of big amounts of data. In this article, we adopted the view that the term big data ultimately stemmed from the 3Vs (variety, velocity & volume) as attributes of the growing flood of data: the amount, the number of sources and the speed at which new data arrives are very large, i.e., “big”.
The CS229 Machine Learning video lectures are one of the most popular online courses that helped to start the MOOC phenomenon as well as Coursera as a company (instructor Andrew Ng is Coursera’s co-founder). They are a very good resource to learn about machine learning, even for people who already have some training in ML.