Workshop 06/30/2021
Join Us
In this third workshop in linear algebra, we will investigate the link between Principal Component Analysis (PCA) and the Singular Value Decomposition (SVD).
Along the way, we will introduce several linear algebra concepts, including linear regression, eigenvalues and eigenvectors, and the conditioning of a system. We will use shared Python scripts and several examples to demonstrate
the ideas discussed.
This workshop builds on the previous two workshops in linear algebra (PLEASE INCLUDE THE LINKS), and we will assume that the linear algebra concepts introduced in those workshops are familiar to the audience. They include:
vector algebra (including inner products and the angle between vectors), matrix-vector multiplication, matrix-matrix multiplication, matrix-vector solves, singularity,
and singular values.
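As a preview of the main idea, here is a minimal NumPy sketch of the PCA/SVD link (an illustrative example, not the workshop's shared scripts): the SVD of a mean-centered data matrix gives the principal directions and explained variances directly.

```python
import numpy as np

# Illustrative data: 100 samples with 3 features (values are random,
# not from the workshop materials).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# PCA is defined on mean-centered columns.
Xc = X - X.mean(axis=0)

# SVD of the centered data matrix: Xc = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal directions (eigenvectors of the
# covariance matrix); the corresponding eigenvalues are s**2 / (n - 1).
principal_directions = Vt
explained_variance = s**2 / (len(X) - 1)

# Scores: the data projected onto the principal directions.
scores = Xc @ Vt.T

print(explained_variance)
```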
#
In this workshop, we engage beginner and intermediate participants interested in getting started with Deep Learning and the Internet of Things (IoT). We’ll do hands-on exercises where you’ll use a webcam and a neural network to recognize images, aggregate data, and run real-time IoT analytics. Our goal is to get you excited about IoT and Deep Learning, and to set you up for success with various types of projects for work, school, and beyond.
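As a taste of the hands-on exercises, the sketch below grabs a single webcam frame with OpenCV and classifies it with a pretrained torchvision ResNet-18. The choice of libraries and model here is an assumption for illustration; the workshop's actual setup may differ.

```python
import cv2
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights

# Pretrained ImageNet classifier (assumed for this sketch; the workshop
# may use a different network).
weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()     # resize/crop/normalize for the model
labels = weights.meta["categories"]

# Grab one frame from the default webcam.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the webcam")

# OpenCV returns BGR uint8 arrays; convert to an RGB CHW tensor.
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(rgb).permute(2, 0, 1)
batch = preprocess(tensor).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top = probs.argmax(dim=1).item()
print(labels[top], float(probs[0, top]))
```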
#
Graph-based algorithms are essential for everything from tracking relationships in social networks to finding the shortest driving route on Google Maps. In this workshop we will explore some of the most useful graph algorithms, from the breadth-first and depth-first methods for searching graphs, to Kruskal’s algorithm for finding a minimum spanning tree of a weighted graph, to approximation methods for solving the traveling salesman problem. We will use hands-on examples in Python to explore the computational complexity and accuracy of these algorithms, and discuss their broader applications.
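As a preview of one of these algorithms, here is a minimal Python sketch of Kruskal's method using a simple union-find structure; the example graph at the bottom is illustrative, not from the workshop materials.

```python
def kruskal(num_vertices, edges):
    """edges: iterable of (weight, u, v) tuples; returns the MST's edges."""
    parent = list(range(num_vertices))

    def find(x):
        # Walk to the root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):   # consider edges in weight order
        ru, rv = find(u), find(v)
        if ru != rv:                     # this edge joins two components
            parent[ru] = rv
            mst.append((weight, u, v))
    return mst

# Example: a small weighted graph on 4 vertices.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # [(1, 0, 1), (2, 1, 3), (3, 1, 2)]
```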
#
Natural language processing has direct real-world applications, from speech recognition to automatic text generation, and from lexical semantics to question answering. In just a decade, neural models have become widespread, largely displacing statistical methods, which require elaborate feature engineering. Popular techniques include the use of word embeddings to capture the semantic properties of words. In this workshop, we take you through the ever-changing journey of neural models while addressing their boons and banes.
The workshop will cover word embeddings, both frequency-based and prediction-based, positional embeddings, and multi-headed attention, along with their application in an unsupervised context.
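To make the attention component concrete, here is a small NumPy sketch of scaled dot-product attention, the building block of multi-headed attention; the shapes and random inputs are illustrative assumptions, not workshop code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) arrays; returns the attended values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise similarity of tokens
    # Numerically stable row-wise softmax over the scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Illustrative inputs: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```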
Prerequisites: basics of neural networks, RNNs/LSTMs
https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
Event Program
June 30, 2021
8:00 AM - 8:45 AM
Linear Algebra, Part I: What Would We Be Without It
Margot Gerritsen, WiDS Worldwide, Stanford University
8:45 AM - 9:30 AM
Deep Learning and the Internet of Things (IoT)
9:30 AM - 10:15 AM
Graph Algorithms
10:15 AM - 11:00 AM
Natural Language Processing
Riyanka Bhowal
*All times are UTC -8