Second Semester at KAUST as a CEMSE/CS Ph.D. Student


This is a post about my second semester (Spring 2021) at KAUST CEMSE/CS.

At KAUST, classes are small, and the professors are world-class scientists.

This allows direct dialogue with professors on the subject, which brings deep insights and inspiration for research papers.


Classes that I took (CS332, CS323, CS398):

As part of my academic load, I took classes at KAUST during my second semester. On one side, I can use them for the Ph.D. qualification requirements; on the other side, they are important for my academic work.

In this post, I would like to give some small insights about each of them.

My academic load during the first semester, which may be of interest to people applying to the CEMSE/CS Ph.D. division of KAUST, is described in the previous note: First-Semester-at-KAUST-post/

Federated Learning, CS 332 with prof. Peter Richtarik

The course was taught by prof. Peter Richtarik, a professor in the CEMSE division, and the course is almost entirely paper-based.

During the class, students jointly with the professor prepared high-quality lecture notes on various subjects, including information theory, local optimization methods, second-order optimization methods, software and systems for FL, cryptography, the Internet of Things, and background material needed to go deep into rigorous mathematical proofs of various statements.

Another part of the class was aimed at creating publishable work, which in my case happened. The class provides a hot start for such work.

If you’re a student and want to go deep into a truly multidisciplinary direction of AI, Federated Learning (FL) can be that choice. Especially because of its working regime (training across billions of devices in the cross-device setting; see the sketch below), FL requires a real understanding of what we are doing. Heuristic methods in that field can be used as a temporary solution, but in the long term, everything should have a strict mathematical form. We will probably still have tunable parameters for modeling and connecting with reality, but at least these parameters should have a decoupled form.
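To make the cross-device regime concrete, here is a minimal sketch of a FedAvg-style scheme: each sampled client runs a few local gradient steps, and the server averages the returned models. The toy quadratic objectives and all names here are my own illustration, not material from the course.

```python
# FedAvg-style local SGD on toy quadratics (my own sketch, not course code).
# Client i holds f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i,
# and the global optimum is the mean of the b_i.
import numpy as np

rng = np.random.default_rng(0)
dim, num_clients = 5, 10
b = rng.normal(size=(num_clients, dim))    # per-client optima

def local_steps(x, b_i, lr=0.1, steps=5):
    """A few local gradient steps on client i's objective."""
    for _ in range(steps):
        x = x - lr * (x - b_i)
    return x

x = np.zeros(dim)                          # server model
for _ in range(50):
    # In the cross-device setting, only a small sample of clients participates.
    sampled = rng.choice(num_clients, size=3, replace=False)
    updates = [local_steps(x.copy(), b[i]) for i in sampled]
    x = np.mean(updates, axis=0)           # server averages local models

print("distance to global optimum:", np.linalg.norm(x - b.mean(axis=0)))
```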

The course allows enough freedom to choose among various topics. The course has opened my eyes to how challenging it is to prepare good lecture notes on difficult subjects in a compact form.

Prof. Peter Richtarik is open to practical and theoretical work in Federated Learning. He always had time to discuss, to help, and to initiate projects for students in his class.

What is also important: during the course you will present lectures. Giving a good live presentation is not an easy skill either, but you can build this skill if you put effort into it in this course.

Deep Learning for Visual Computing, CS323 with prof. Bernard Ghanem

Even though I had encountered Deep Learning (DL) methods during my career at NVIDIA and while taking several classes at Stanford University, this class was very useful.

The course was taught by prof. Bernard Ghanem from the Image and Video Understanding Lab. As of 2021, prof. Bernard Ghanem is also the KAUST AI Initiative Leader and Deputy Director of the AI Initiative.

This was a pretty intensive course in terms of the amount of material and homework assignments. Prof. Bernard Ghanem encourages questions and discussion during lectures. He accurately highlighted the interesting results and helped filter them from temporary heuristics for problems the community cannot yet solve exactly. This course is beneficial for people who want to catch up with improvements in state-of-the-art image and video DL methods.

The course is a mixture of methods covering classical computer vision works from 2012, like AlexNet, up to recent models, including Transformers, PointNet++, ResNeSt, GANs (which can be viewed as a deterministic two-player game), and VAEs (in which the output of the Encoder is plugged into the Decoder in a probabilistic way). Nowadays, VAEs are competitive with GANs, e.g., the NVAE paper.
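To make the VAE remark concrete, here is a minimal PyTorch sketch of how the Encoder output is plugged into the Decoder in a probabilistic way via the reparameterization trick. This is my own illustration with arbitrary layer sizes, not code from the course.

```python
# Minimal VAE forward pass (my own sketch, not course code).
# The Encoder predicts a mean and log-variance; a latent sample
# z = mu + sigma * eps is drawn (reparameterization trick) and fed
# into the Decoder. Layer sizes are arbitrary.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)    # outputs [mu, log_var]
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        eps = torch.randn_like(mu)                # noise ~ N(0, I)
        z = mu + torch.exp(0.5 * log_var) * eps   # probabilistic "plug-in"
        return self.dec(z), mu, log_var

x = torch.rand(8, 784)                            # a dummy batch
recon, mu, log_var = TinyVAE()(x)
# ELBO = reconstruction term - KL(q(z|x) || N(0, I))
kl = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var).sum(dim=-1).mean()
print(recon.shape, kl.item())
```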

An excellent and unusual thing, compared to the education model at Stanford, where I took the classes CS229 and CS230 with prof. A. Ng, is the following.

In reading assignments, students have to read the original papers, answer questions in detail, and prove that they understand the scope and limitations of the proposed approaches. The course has opened my eyes to the fact that in Applied Machine Learning, in 90% of cases, it is far easier to just read the original papers. Of course, this is not always the case across all of Applied Math in general, but here it is. In the Stanford classes, students were encouraged to read original papers, but it was not strictly required.

Another special thank-you goes to the group of fantastic teaching assistants. The homework started from first principles, using PyTorch as the computation backend and simple models. Then, somewhere in the middle, the course moved to pretty complicated schemes for image, video, and point-cloud classification. I recommend this course.

The homework assignments were in Python using PyTorch.
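To give a flavor of the "first principles" starting point, here is a minimal PyTorch training loop that fits a linear model to synthetic data. This is my own sketch, not an actual assignment from the course.

```python
# A minimal "first principles" PyTorch training loop (my own sketch,
# not an actual assignment): fit a linear model to synthetic data.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 3)                       # synthetic inputs
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(256)

model = nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(100):
    opt.zero_grad()                           # clear old gradients
    loss = loss_fn(model(X).squeeze(-1), y)   # forward pass
    loss.backward()                           # backprop
    opt.step()                                # gradient step

print("final loss:", loss.item())
print("learned weights:", model.weight.data)
```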

There will be no problem if you have not used PyTorch before, because the homework is structured pretty clearly, and in fact, one of its goals is to build your ability to use PyTorch. During the course, I read the original PyTorch documentation, and it is written nicely and clearly. I have organized links to the documentation references (my mental picture for the course) in the following table.

If you have never used Python as a programming language, I recommend the following materials:

CS398 Graduate Seminar organized by prof. Ivan Viola

The Graduate Seminar is a non-credit weekly seminar in various fields connected to the CEMSE division, where the speakers are professors from KAUST or other universities who share their research.


Written on October 20, 2021