AI Summer Schools Visited During Summer 2021


Here are several engaging summer schools that I had a chance to attend during the summer of 2021.


Regularization Methods for ML 2021

The school was led by Lorenzo Rosasco and ran from June 21 to June 25, 2021. The course's main goal was to understand various modeling aspects and some of the optimization and statistical principles behind intelligent systems, with the main focus on regularization techniques. Regularization methods allow a broad class of diverse approaches to be treated in a unified way while providing tools to design new ones. The course started with the classical notions of smoothness, shrinkage, and margin, and then covered state-of-the-art techniques based on geometry (aka manifold learning) and sparsity, as well as algorithms for feature selection, structured prediction, and multi-task learning.
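
To make the notion of shrinkage concrete, below is a minimal sketch of ridge regression (Tikhonov regularization) in Python/NumPy. This is my own illustration on synthetic data, not material from the course: the penalty lam * ||w||^2 shrinks the learned weights toward zero, and increasing lam trades variance for bias.

    # Ridge regression (Tikhonov regularization) on synthetic data; illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 10
    X = rng.normal(size=(n, d))                  # synthetic design matrix
    w_true = rng.normal(size=d)                  # ground-truth weights
    y = X @ w_true + 0.1 * rng.normal(size=n)    # noisy targets

    lam = 1.0  # regularization strength; larger values shrink the weights more
    # Closed-form minimizer of ||X w - y||^2 + lam * ||w||^2
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    w_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # unregularized least squares

    print("||w_ridge|| =", np.linalg.norm(w_ridge))
    print("||w_ls||    =", np.linalg.norm(w_ls))

The ridge weights come out with a smaller norm than the plain least-squares weights, which is exactly the shrinkage effect the course builds on.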

Link to the summer school: https://ml.unige.it/education/school/regml2021.

Attendance certificate of Konstantin Burlachenko: Link


The PRAIRIE/MIAI AI Summer School 2021

The PRAIRIE/MIAI AI summer school comprises lectures by renowned experts in different areas of artificial intelligence. The third edition of this summer school was held online from July 5 to July 9, 2021. It included presentations on several topics, such as computer vision, machine learning, natural language processing, robotics, and healthcare. The 2021 summer school featured lecturers including Lourdes Agapito (UCL), Stéphanie Allassonnière (Univ. Paris), Francis Bach (Inria), Alejandrina Cristià (ENS), Andrew Davison (Imperial), Pascale Fung (HKUST), Arthur Gretton (UCL), Hannah Kerner (Univ. Maryland), Yann LeCun (Facebook / NYU), Catherine Nakalembe (Univ. Maryland), Cordelia Schmid (Inria / Google), and Jean-Philippe Vert (Google / Mines ParisTech).

Link to the summer school: https://project.inria.fr/paiss/.

Attendance certificate of Konstantin Burlachenko: Link


Oxford ML Summer School 2021

The second Oxford Machine Learning Summer School (OxML 2021) aimed to provide its participants with best-in-class training on a broad range of advanced topics and developments in machine learning (ML) and deep learning (DL). The school covered some of the most critical topics in ML/DL, reflecting the field's growing interest in Bayesian ML, representation learning, computer vision, natural language processing (NLP), reinforcement learning, causal ML, transfer learning, healthcare, and medicine.

Reading materials

  1. Gaussian Processes for Machine Learning http://www.gaussianprocess.org/gpml/chapters/RW.pdf

  2. Taking the Human Out of the Loop: A Review of Bayesian Optimization https://ieeexplore.ieee.org/document/7352306

  3. High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning https://arxiv.org/abs/2106.03609

  1. Inductive Biases for Deep Learning of Higher-Level Cognition https://arxiv.org/abs/2011.15091
  2. Toward Causal Representation Learning https://ieeexplore.ieee.org/abstract/document/9363924
  3. A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms https://arxiv.org/abs/1901.10912
  4. Learning Neural Causal Models from Unknown Interventions https://arxiv.org/abs/1910.01075
  5. Systematic Evaluation of Causal Discovery in Visual Model-Based Reinforcement Learning https://arxiv.org/abs/2106.06607
  6. Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization https://arxiv.org/abs/2106.06607

Bayesian ML

  1. Variational Inference: A Review for Statisticians https://arxiv.org/abs/1601.00670
  2. Advances in Variational Inference https://arxiv.org/pdf/1711.05597.pdf
  3. Tutorial on Variational Autoencoders https://arxiv.org/pdf/1606.05908.pdf
  4. Auto-Encoding Variational Bayes https://arxiv.org/pdf/1312.6114.pdf

Probabilistic causal ML:

  5. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges https://arxiv.org/abs/2104.13478
  6. Geometric foundations of Deep Learning https://towardsdatascience.com/geometric-foundations-of-deep-learning-94cdd45b451d
  7. Causality: Models, Reasoning, and Inference, Cambridge, 2nd Ed., 2009. Chapters 1-3. https://www.amazon.co.uk/Causality-Judea-Pearl/dp/052189560X
  8. Elements of Causal Inference: Foundations and Learning Algorithms, MIT Press, 2017. Chapters 1-6. https://library.oapen.org/bitstream/handle/20.500.12657/26040/11283.pdf
  1. Pattern Recognition and Machine Learning, Introduction to GPs, pages 303 - 320. https://www.microsoft.com/en-us/research/people/cmbishop/prml-book/
  2. Gaussian Processes for Machine Learning (free download) http://www.gaussianprocess.org/gpml/
  1. Attention Is All You Need https://arxiv.org/abs/1706.03762
  2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805
  3. BEHRT: Transformer for Electronic Health Records
  4. RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism https://arxiv.org/abs/1608.05745
  1. Deep neural network models for computational histopathology: A survey https://arxiv.org/abs/1912.12378
  2. Deep learning in histopathology: the path to the clinic https://www.nature.com/articles/s41591-021-01343-4

Computational histopathology papers covered in Lea Goetz’s lecture:

  3. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images https://www.nature.com/articles/s41591-019-0508-1
  4. Closing the translation gap: AI applications in digital pathology. https://doi.org/10.1016/j.bbcan.2020.188452
  5. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer https://www.nature.com/articles/s41746-019-0112-2
  6. Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images https://www.sciencedirect.com/science/article/abs/pii/S1361841519301045?via%3Dihub
  7. Pathology GAN https://2020.midl.io/papers/quiros20.html
  8. Self-Supervision Closes the Gap Between Weak and Strong Supervision in Histology https://arxiv.org/abs/2012.03583
  9. Capturing Cellular Topology in Multi-Gigapixel Pathology Images https://openaccess.thecvf.com/content_CVPRW_2020/papers/w16/Lu_Capturing_Cellular_Topology_in_Multi-Gigapixel_Pathology_Images_CVPRW_2020_paper.pdf
  10. Data-efficient and weakly supervised computational pathology on whole-slide images https://github.com/mahmoodlab/CLAM

Multiple-Instance Learning:

  1. Multiple-Instance Learning for Medical Image and Video Analysis https://pubmed.ncbi.nlm.nih.gov/28092576
  2. Attention is all you need https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
  3. GANs: Generative Adversarial Networks https://arxiv.org/abs/1406.2661
  4. Segmentation: U-Net: Convolutional Networks for Biomedical Image Segmentation https://arxiv.org/abs/1505.04597
  5. Contrastive Learning: A Simple Framework for Contrastive Learning of Visual Representations https://arxiv.org/abs/2002.05709

Causal ML, Interpretability in ML:

  1. AutoML: powering the new human-machine learning ecosystem https://www.vanderschaar-lab.com/automl-powering-the-new-human-machine-learning-ecosystem/
  2. From black boxes to white boxes https://www.vanderschaar-lab.com/from-black-boxes-to-white-boxes/
  3. From real-world patient data to individualized treatment effects using machine learning: Current and future methods to address underlying challenges https://ascpt.onlinelibrary.wiley.com/doi/abs/10.1002/cpt.1907
  4. Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes http://papers.nips.cc/paper/6934-bayesian-inference-of-individualized-treatment-effects-using-multi-task-gaussian-processes
  5. Validating Causal Inference Models via Influence Functions https://www.vanderschaar-lab.com/papers/Validating_CI_models_via_IF_manuscript.pdf
  6. Estimating Counterfactual Treatment Outcomes over Time through Adversarially Balanced Representations https://openreview.net/pdf?id=BJg866NFvB
  1. “Our World in Data: Emissions by Sector” - context on the sources of GHG emissions https://ourworldindata.org/emissions-by-sector
  2. “Oil in the Cloud” (recommended) - a worthwhile read on some of the negative uses of ML. https://www.greenpeace.org/usa/reports/oil-in-the-cloud/
  3. “Tackling Climate Change with Machine Learning” (optional) - David Rolnick’s talk will be centered on this paper https://arxiv.org/abs/1906.05433

Link to the summer school: https://www.oxfordml.school/

Attendance certificate of Konstantin Burlachenko: Link


Conference and summer school “Optimization without Borders”

HSE University, MIPT, and Sirius University organized the conference ‘Optimization without Borders’, dedicated to the 65th anniversary of Prof. Yurii Nesterov and the 50th anniversary of Prof. Vladimir Protasov.

Link to the summer school: https://cs.hse.ru/hdilab/opti/

The conference program: https://cs.hse.ru/mirror/pubs/share/485504552


Written on October 24, 2021