EMERGE Lab

2024: Deep Learning for Urban Systems

Instructor: Eugene Vinitsky

Course Breakdown

This course is a graduate seminar whose intent is to get you to deep-learning-practitioner status as quickly as possible. Since this is a one-semester course, we are going to trade theoretical depth for a lot of practical experience training deep learning models. This also means that we are going to skip a lot of the standard material in an ML course and focus almost entirely on the small set of topics that I think constitute the standard DL practitioner's toolbox for the type of applications you are likely to be working on. This means relentless pruning: we are not going to cover topics that are interesting and would be necessary to do research in ML itself, even ones that are arguably prerequisites for a course like this.

As a warning, this course is intended to get you up to speed from almost zero, and as a consequence it is unavoidably a lot of work. I am quite serious about that: there is going to be a lot of homework, quizzes, etc. In particular, the first homework comes quite early so as to give you a sense of just how much work I mean, so that you can drop the class if it doesn't fit into the amount of time you have. The hope is that the outcome, being able to quickly spin up DL models for the modeling problems that show up in your research, is worth it to you; as you read this, consider carefully whether that's true! Note that the first half of the course is going to be more work than the second, as the second half is more focused on special topics.

At the conclusion of the course, I expect to get you to partial fluency in:

Course Project

This is a graduate course intended to help you incorporate tools from DL into your research. As such, the primary element of the course is a project that you will develop over the duration of the class. There are three elements to the project: a proposal, a mid-semester checkpoint, and the final paper.

The final project is intended to be a paper written in the style of a NeurIPS 2024 submission; see the associated style file on that page. The paper should be roughly eight pages, though it can be longer, and should conform to the conventions of a paper: it should either demonstrate a new result, describe the construction of an engineering project in detail, or be an in-depth investigation of a topic. As an alternative to new work, I will also allow a clear write-up of an existing paper that would let a beginner understand it; see the ICLR blog post track for examples of what I mean. Note that longer papers will not receive additional credit for their length: eight pages is the expectation, and the higher limit is merely to give you extra space if you need it. Similarly, if a substantive result can be described in fewer than eight pages, that is also fine.

Project Proposal

The project proposal should be a one-page LaTeX document outlining a concrete research question or engineering task. I'm insistent about LaTeX here because, if you don't know LaTeX yet, you need to learn it at this point. The proposal is due quite early in the course, possibly before you have all of the relevant background, so that I can help you refine it.
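
If you have not used LaTeX before, a bare-bones one-page proposal can be as simple as the following sketch; the section names here are suggestions, not a required template.

    \documentclass[11pt]{article}
    \usepackage[margin=1in]{geometry}

    \title{Project Proposal: Your Title Here}
    \author{Your Name}
    \date{\today}

    \begin{document}
    \maketitle

    \section{Research Question}
    One or two sentences stating the concrete question or engineering task.

    \section{Approach}
    The models and data you plan to use, and why they fit the question.

    \section{Evaluation}
    How you will know whether it worked, e.g. metrics and baselines.

    \end{document}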

Mid-Semester Checkpoint

At this point I expect you to have a three-page write-up that outlines your progress so far, any open questions you have not been able to resolve, and a list of the work that remains before the completion of the final project.

Grading

This is a graduate course and grading is intended to be fairly lenient; the expectation is that you are excited to learn things and do not need to be cajoled into doing so by fear of a bad grade. As such, most of the grade comes from participation. If you do the work, you should expect to receive a very good grade. There will be four homework assignments over the course and a small number of quizzes, intended to mimic spaced repetition and help you assess whether you have understood the material. The grading will be as follows:

Late material policy

Life happens. As such, you have a total of four late days that you can distribute across anything. Just write at the top of the material you are turning in how many late days you are using. After that, for each day something is late it will lose 10% of the total possible grade. If there are extenuating circumstances that require you to exceed those four days, let me know! As I've said, life happens.

Cheating Policy

Cheating is obviously not allowed. Copying answers or code from another student or from the internet constitutes cheating, and you will be referred to the appropriate NYU procedure for handling it. Things must be in your own words. Collaborating with another student is allowed, however, as long as you indicate whom you collaborated with and your answers and writing are in your own words.

ChatGPT Policy

I love ChatGPT and use it all the time (mostly for little bash scripts and regex). However, one goal of learning is to develop fluency: the state in which you can come up with ideas and use tools and knowledge without reference to an external data store. This is very similar to the development of language fluency; imagine if, instead of learning a foreign language, you tried to look every word up in a dictionary! The same thing happens in research: there are ideas you want to be able to pull out without having to look them up every time. I will try to make clear in the course what those foundational concepts are. A similar thing applies to writing: you want to be able to write quickly and thoughtfully, and the only way to get there is practice and repetition.

Using ChatGPT to skip steps delays the development of fluency. Overuse of ChatGPT may get you through the course more quickly in the short term, but it will make you a worse researcher in the long term and harm your educational experience.

As such, my rules for ChatGPT are the following:

Note: I acknowledge that there's basically no way for me to check whether you have followed these rules. My hope is that you use ChatGPT in the ways outlined above because the alternative will harm your development as a researcher, not out of fear of consequences.

Office Hours

I will hold a one-hour office hour twice a week; the times will be posted here once they are scheduled. Please use this time to come talk to me about homework, research, whatever. It's your time to use and I am excited to talk to you! You are required to attend at least one office hour during the semester.

Inclusion Statement

The NYU Tandon School of Engineering values an inclusive and equitable environment for all our students. I hope to foster a sense of community in this class and consider it a place where individuals of all backgrounds, beliefs, ethnicities, national origins, gender identities, sexual orientations, religious and political affiliations, and abilities will be treated with respect. It is my intent that all students’ learning needs be addressed, and that the diversity that students bring to this class be viewed as a resource, strength and benefit. If this standard is not being upheld, please feel free to speak with me.

Moses Center Statement of Disability

If you are a student with a disability who is requesting accommodations, please contact New York University’s Moses Center for Students with Disabilities at 212-998-4980 or mosescsd@nyu.edu. You must be registered with CSD to receive accommodations. Information about the Moses Center can be found at www.nyu.edu/csd. The Moses Center is located at 726 Broadway on the 2nd floor.

NYU School of Engineering Policies and Procedures on Academic Misconduct

  1. Introduction: The School of Engineering encourages academic excellence in an environment that promotes honesty, integrity, and fairness, and students at the School of Engineering are expected to exhibit those qualities in their academic work. It is through the process of submitting their own work and receiving honest feedback on that work that students may progress academically. Any act of academic dishonesty is seen as an attack upon the School and will not be tolerated. Furthermore, those who breach the School’s rules on academic integrity will be sanctioned under this Policy. Students are responsible for familiarizing themselves with the School’s Policy on Academic Misconduct.
  2. Definition: Academic dishonesty may include misrepresentation, deception, dishonesty, or any act of falsification committed by a student to influence a grade or other academic evaluation. Academic dishonesty also includes intentionally damaging the academic work of others or assisting other students in acts of dishonesty. Common examples of academically dishonest behavior include, but are not limited to, the following:
    1. Cheating: intentionally using or attempting to use unauthorized notes, books, electronic media, or electronic communications in an exam; talking with fellow students or looking at another person’s work during an exam; submitting work prepared in advance for an in-class examination; having someone take an exam for you or taking an exam for someone else; violating other rules governing the administration of examinations.
    2. Fabrication: including, but not limited to, falsifying experimental data and/or citations.
    3. Plagiarism: intentionally or knowingly representing the words or ideas of another as one’s own in any academic exercise; failure to attribute direct quotations, paraphrases, or borrowed facts or information. 
    4. Unauthorized collaboration: working together on work that was meant to be done individually.
    5. Duplicating work: presenting for grading the same work for more than one project or in more than one class, unless express and prior permission has been received from the course instructor(s) or research adviser involved. 
    6. Forgery: altering any academic document, including, but not limited to, academic records, admissions materials, or medical excuses.

If you are experiencing an illness or any other situation that might affect your academic performance in a class, please email Deanna Rayment, Coordinator of Student Advocacy, Compliance and Student Affairs: deanna.rayment@nyu.edu.  Deanna can reach out to your instructors on your behalf when warranted.

Useful references

This course is heavily based on Simon Prince's "Understanding Deep Learning", the Deep Learning course at the University of Amsterdam taught by Yuki Asano, and "Dive into Deep Learning". It also draws a bit from Berkeley's Deep Unsupervised Learning course, taught by Pieter Abbeel. Links to all of these are below.

Course Schedule - Topics

A quick overview of the logic of the course schedule. Based on preliminary discussions, not everyone entering the course has the RL background necessary to get to the multi-agent learning pieces that will form their projects. However, we need to cover some of the multi-agent material early so that you actually have some tools and ideas with which to propose your project! As such, we're going to start with a quick overview of some multi-agent topics, then detour for a few weeks into an overview of RL before returning to the multi-agent component.

Combined Course Schedule
Each week lists, where applicable, the topics covered, the expected learning outcomes, the weekly requirements, and the course materials.

Week 1
  Topics covered:
    • Introduction to deep learning
    • Basic things you need to know (PyTorch, tensors, computation graphs, devices)
    • Intro to clusters (SLURM, git)
    • Linear "neural" networks for regression
  Expected learning outcomes:
    • Understand the basic ideas behind PyTorch
    • Comfort launching jobs and creating environments on the cluster
    • Overview of linear algebra and autodiff
  Weekly requirements:
    • Mini "homework": cluster setup, PyTorch operations, gradient descent for linear regression (a sketch of the last item appears below)
    • Calibration exercises
  Course materials:
    • Chapter 3 of "Understanding Deep Learning"
    • UvA introduction to PyTorch
    • Chapter 2 of "Dive into Deep Learning"
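
To give a flavor of the Week 1 material, here is a minimal sketch of gradient descent for linear regression in PyTorch; the synthetic data and hyperparameters are illustrative, not the homework specification.

    import torch

    # Synthetic data: y = 3x + 2 plus a little noise.
    torch.manual_seed(0)
    x = torch.randn(100, 1)
    y = 3.0 * x + 2.0 + 0.1 * torch.randn(100, 1)

    # Parameters of a linear "neural" network, tracked by autograd.
    w = torch.zeros(1, 1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)

    lr = 0.1
    for step in range(200):
        y_hat = x @ w + b                  # forward pass builds the computation graph
        loss = ((y_hat - y) ** 2).mean()   # mean squared error
        loss.backward()                    # backprop fills in w.grad and b.grad
        with torch.no_grad():              # update the parameters outside the graph
            w -= lr * w.grad
            b -= lr * b.grad
            w.grad.zero_()
            b.grad.zero_()

    print(w.item(), b.item())  # should be close to 3 and 2
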
Week 2
  Topics covered:
    • What are deep neural networks doing?
    • A second introduction to gradient descent
  Expected learning outcomes:
    • Understanding of feedforward MLPs
    • Understand how to derive initialization schemes (see the variance-propagation sketch below)
  Course materials:
    • Chapter 4 of "Understanding Deep Learning"
    • Activation Functions notebook
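
The Week 2 outcome about deriving initialization schemes is, at bottom, about controlling how activation variance propagates with depth. This toy check (a stack of linear layers; the sizes are arbitrary) shows Xavier initialization holding the variance steady, where a unit-normal initialization would multiply it by the fan-in at every layer.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(1024, 512)
    for i in range(10):
        layer = nn.Linear(512, 512, bias=False)
        nn.init.xavier_normal_(layer.weight)  # Var(W) = 2 / (fan_in + fan_out)
        x = layer(x)
        # Variance stays near 1.0; with std-1 normal weights it would grow ~512x per layer.
        print(i, x.var().item())
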
Week 3
  Topics covered:
    • Backpropagation
    • SGD and Adam
  Weekly requirements:
    • Homework 1: build your own backprop, derive backprop, implement a dead-neuron tracker (an illustrative manual-backprop sketch follows below)
  Course materials:
    • Chapters 6 and 7 of "Understanding Deep Learning"
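
As a taste of what "build your own backprop" means (an illustration, not the Homework 1 solution), you can derive the gradient of a tiny network by hand and check it against autograd:

    import torch

    torch.manual_seed(0)
    x = torch.randn(5, 3)
    W = torch.randn(3, 4, requires_grad=True)
    y = torch.randn(5, 4)

    # Forward: one linear layer, a ReLU, and a mean-squared-error loss.
    z = x @ W
    h = z.clamp(min=0)
    loss = ((h - y) ** 2).mean()

    # Manual backward pass: apply the chain rule one step at a time.
    dloss_dh = 2.0 * (h - y) / h.numel()    # derivative of the mean of squares
    dloss_dz = dloss_dh * (z > 0).float()   # ReLU passes gradient only where z > 0
    dloss_dW = x.t() @ dloss_dz             # gradient with respect to the weights

    loss.backward()                          # autograd computes the same quantity
    print(torch.allclose(dloss_dW, W.grad))  # True
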
Week 4
  Topics covered:
    • Initializations
    • Training tricks (early stopping, ensembles, regularizers, augmentation, EMA, gradient clipping); see the sketch below
  Weekly requirements:
    • Homework 2: gradient explosion / starvation, training ensemble networks, trying different regularizers
  Course materials:
    • Chapter 9 of "Understanding Deep Learning"
    • Chapter 6 of "Dive into Deep Learning"
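
Two of the Week 4 tricks, gradient clipping and an exponential moving average (EMA) of the weights, take only a few lines in a standard training loop; the tiny model and random batches below are stand-ins for your real setup.

    import copy
    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # EMA copy of the model: updated after every step, typically used for evaluation.
    ema_model = copy.deepcopy(model)
    ema_decay = 0.999

    for step in range(100):
        x, y = torch.randn(32, 10), torch.randn(32, 1)  # stand-in for a real batch
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        # Gradient clipping: rescale gradients so their global norm is at most 1.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        opt.step()
        # EMA update: ema <- decay * ema + (1 - decay) * current weights.
        with torch.no_grad():
            for p_ema, p in zip(ema_model.parameters(), model.parameters()):
                p_ema.mul_(ema_decay).add_(p, alpha=1 - ema_decay)
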
Week 5
  Topics covered:
    • ConvNets
    • ResNets and RNNs
    • Backprop through time (see the RNN sketch below)
  Weekly requirements:
    • Homework 3: train an RNN to predict text and time-series data
  Course materials:
    • Chapter 7 of "Understanding Deep Learning"
    • Chapters 7, 8, 9, and 10 of "Dive into Deep Learning"
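
A minimal sketch of the Week 5 idea of next-token prediction with an RNN; the vocabulary and batch are random stand-ins. Calling .backward() on the loss summed over the unrolled sequence is exactly backprop through time.

    import torch
    import torch.nn as nn

    vocab_size, hidden_size = 27, 64
    embed = nn.Embedding(vocab_size, hidden_size)
    rnn = nn.RNN(hidden_size, hidden_size, batch_first=True)
    head = nn.Linear(hidden_size, vocab_size)
    params = list(embed.parameters()) + list(rnn.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    # Toy batch of token sequences; the task is to predict token t+1 from tokens <= t.
    tokens = torch.randint(0, vocab_size, (8, 33))
    inputs, targets = tokens[:, :-1], tokens[:, 1:]

    out, _ = rnn(embed(inputs))   # (batch, time, hidden)
    logits = head(out)            # (batch, time, vocab)
    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()               # backprop through time over the unrolled graph
    opt.step()
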
Week 7
  Topics covered:
    • Transformers (a minimal attention sketch follows below)
  Course materials:
    • Chapter 11 of "Dive into Deep Learning"
    • Chapter 12 of "Understanding Deep Learning"
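
The core computation of Week 7 fits in a few lines: scaled dot-product attention, the building block of the transformer. A minimal single-head sketch:

    import math
    import torch

    def attention(q, k, v):
        # q, k, v: (batch, seq_len, d). Scores compare every query with every key.
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        weights = scores.softmax(dim=-1)  # each query's weights over positions sum to 1
        return weights @ v                # weighted average of the values

    q = k = v = torch.randn(2, 5, 16)     # self-attention: same sequence as q, k, and v
    out = attention(q, k, v)              # (2, 5, 16)
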
Week 8
  Topics covered:
    • Normalizing flows and variational autoencoders (see the VAE sketch below)
  Course materials:
    • Chapters 16 and 17 of "Understanding Deep Learning"
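
As a preview of Week 8, the two VAE-specific ingredients, the reparameterization trick and the ELBO loss (a reconstruction term plus a KL term), look like this for a Gaussian encoder; the one-layer networks and sizes are placeholders.

    import torch
    import torch.nn as nn

    enc = nn.Linear(784, 2 * 32)  # outputs the mean and log-variance of q(z|x)
    dec = nn.Linear(32, 784)

    x = torch.rand(16, 784)       # stand-in for a batch of flattened images
    mu, logvar = enc(x).chunk(2, dim=-1)

    # Reparameterization trick: sample z differentiably as z = mu + sigma * eps.
    eps = torch.randn_like(mu)
    z = mu + (0.5 * logvar).exp() * eps

    # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
    recon = nn.functional.mse_loss(dec(z), x, reduction="sum") / x.size(0)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    loss = recon + kl
    loss.backward()
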
Week 9
  Topics covered:
    • Self-supervised learning (see the contrastive-loss sketch below)
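
One self-supervised objective you will meet in Week 9 is a contrastive, InfoNCE-style loss: embeddings of two augmented views of the same example should match each other and not the rest of the batch. A minimal sketch with random stand-ins for the encoder outputs:

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        # z1[i] and z2[i] are embeddings of two views of the same example.
        z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / temperature  # cosine similarity of every pair
        labels = torch.arange(z1.size(0))   # the positives sit on the diagonal
        return F.cross_entropy(logits, labels)

    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    loss = info_nce(z1, z2)
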
Week 10
  Topics covered:
    • Special topics: training diffusion models (see the sketch below)
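
The heart of training a diffusion model (Week 10) is short: corrupt clean data with a known amount of noise and train a network to predict that noise. A sketch of one DDPM-style training step, with a small MLP standing in for the real denoiser:

    import torch
    import torch.nn as nn

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)          # noise schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

    denoiser = nn.Sequential(nn.Linear(16 + 1, 64), nn.ReLU(), nn.Linear(64, 16))

    x0 = torch.randn(32, 16)                       # stand-in for clean data
    t = torch.randint(0, T, (32,))
    eps = torch.randn_like(x0)

    # Forward process: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps.
    a = alpha_bar[t].unsqueeze(-1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps

    # The denoiser sees the noisy sample plus the timestep and predicts the noise.
    t_feat = (t.float() / T).unsqueeze(-1)
    eps_pred = denoiser(torch.cat([xt, t_feat], dim=-1))
    loss = nn.functional.mse_loss(eps_pred, eps)
    loss.backward()
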
Week 11
  Topics covered:
    • Special topics: making models fast (see the mixed-precision sketch below)
  Course materials:
    • Chapter 13 of "Dive into Deep Learning"
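
One standard speed tool from Week 11 is automatic mixed precision: run most of the forward and backward pass in half precision while a loss scaler guards against gradient underflow. A sketch using PyTorch's built-in AMP support; it assumes a CUDA GPU is available.

    import torch
    import torch.nn as nn

    device = "cuda"  # this sketch assumes a CUDA GPU
    model = nn.Linear(512, 512).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()

    for step in range(100):
        x = torch.randn(64, 512, device=device)
        y = torch.randn(64, 512, device=device)
        opt.zero_grad()
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = nn.functional.mse_loss(model(x), y)  # matmuls run in float16
        scaler.scale(loss).backward()  # scale the loss so fp16 gradients don't underflow
        scaler.step(opt)               # unscales gradients, then takes the optimizer step
        scaler.update()
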
Week 12
  Topics covered:
    • Special topics: state space models (see the recurrence sketch below)
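
State space models (Week 12) are built on a linear recurrence, h_t = A h_{t-1} + B x_t with output y_t = C h_t; much of the research is about making this recurrence expressive and fast. A naive sequential sketch with illustrative matrices:

    import torch

    d_state, d_in, seq_len = 8, 1, 50
    A = 0.9 * torch.eye(d_state)  # a stable state-transition matrix (illustrative)
    B = torch.randn(d_state, d_in)
    C = torch.randn(d_in, d_state)

    x = torch.randn(seq_len, d_in)
    h = torch.zeros(d_state)
    ys = []
    for t in range(seq_len):
        h = A @ h + B @ x[t]      # h_t = A h_{t-1} + B x_t
        ys.append(C @ h)          # y_t = C h_t
    y = torch.stack(ys)           # (seq_len, d_in)
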
Week 13
  Topics covered:
    • Special topics: what's an LLM?