Workshop
What: building Arduino projects using machine learning (applied to real-time sensor data)
When: August 13th, 2016, 10 am to 6 pm
Where: Jacobs Institute, UC Berkeley (map)
Who: David A. Mellis, Nick Gillian, Ben Zhang
Interested? Apply by completing this survey. Note that this workshop is part of a research project at UC Berkeley conducted by David Mellis with supervision from Bjoern Hartmann.
Have you ever tried to recognize how someone was moving using an accelerometer? Or detect sounds in the environment using a microphone? Did you get stuck when it came time to write the code to do this pattern recognition? If so, you might want to learn about machine learning, a set of techniques for recognizing patterns in data.
In this workshop, we'll teach you how to apply machine learning to real-time sensor data, e.g. for use in interactive projects built with Arduino and other electronics / prototyping platforms. Applications include:
- recognizing gestures using an accelerometer
- recognizing how people are touching an object using capacitive sensing
- identifying people from their voices using a microphone
- detecting objects by identifying their colors
In the workshop, we'll use the ESP system and the Gesture Recognition Toolkit (GRT) for machine learning. The GRT supplies a wide range of machine learning algorithms for real-time sensor data, and ESP provides a user interface for working with and customizing those algorithms and their associated data. We'll have a range of Arduino boards and sensors on hand for participants to experiment with.
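To give a concrete sense of what the GRT is doing under the hood, here's a minimal C++ sketch of a GRT classification pipeline. It's an illustrative sketch under assumptions, not workshop code: it assumes GRT is installed with its headers on your include path, and the hard-coded samples stand in for real recorded sensor data.

```cpp
#include <GRT/GRT.h>  // assumes GRT is installed system-wide
#include <iostream>
using namespace GRT;

int main() {
    // Labeled training data: each sample is a 3-axis accelerometer reading.
    ClassificationData trainingData;
    trainingData.setNumDimensions(3);

    // Hard-coded stand-ins for real recorded data: a few slightly varied
    // samples for each of two gesture classes.
    for (UINT i = 0; i < 10; i++) {
        VectorFloat s1(3);
        s1[0] = 0.10 + 0.010 * i;
        s1[1] = 0.90 - 0.010 * i;
        s1[2] = 0.00 + 0.005 * i;
        trainingData.addSample(1, s1);  // class 1

        VectorFloat s2(3);
        s2[0] = 0.80 - 0.010 * i;
        s2[1] = 0.10 + 0.010 * i;
        s2[2] = 0.50 - 0.005 * i;
        trainingData.addSample(2, s2);  // class 2
    }

    // Build a pipeline around a classifier; ANBC (Adaptive Naive Bayes
    // Classifier) is one of the many classifiers GRT provides.
    GestureRecognitionPipeline pipeline;
    pipeline.setClassifier(ANBC());

    if (!pipeline.train(trainingData)) {
        std::cout << "Failed to train pipeline" << std::endl;
        return EXIT_FAILURE;
    }

    // Classify a new, unseen reading.
    VectorFloat live(3);
    live[0] = 0.12; live[1] = 0.88; live[2] = 0.02;
    if (pipeline.predict(live)) {
        std::cout << "Predicted class: " << pipeline.getPredictedClassLabel()
                  << " (likelihood " << pipeline.getMaximumLikelihood() << ")"
                  << std::endl;
    }
    return EXIT_SUCCESS;
}
```

In the workshop you won't need to write this code yourself; ESP wraps pipelines like this one in a graphical interface for recording training data and watching live predictions.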
- 10:00 am - 10:15 am: Arrival, opening surveys
- 10:15 am - 10:45 am: Opening discussion
- 10:45 am - 11:45 am: Introduction to and overview of ESP, the ESP gesture recognition example (and tutorial), and an intro to machine learning terminology
- 11:45 am - 12:30 pm: Overview of other examples (individually)
- 12:30 pm - 1:30 pm: Lunch
- 1:30 pm - 4:30 pm: Projects
- 4:30 pm - 5:00 pm: Project presentations
- 5:00 pm - 6:00 pm: Closing surveys and discussion
Machine learning refers to a broad range of techniques for teaching software using example data, rather than by explicitly programming fixed rules. Machine learning has become popular for a wide range of tasks, many of which involve massive data sets of the kind collected by Google, Amazon, Baidu, and others.
This workshop focuses on applying machine learning to real-time sensor data (e.g. from an Arduino or a computer's microphone). We've created machine learning pipelines for particular application domains, like gesture recognition, speaker identification, and the others listed above. At the core of these pipelines is a machine learning algorithm, or classifier, that recognizes patterns in data based on their similarity to pre-collected, labeled examples -- training data. The ESP system allows you to iteratively train the machine learning pipeline using training data collected from a live stream of sensor readings. This data can come from sensors connected to an Arduino or other microcontroller, or from the microphone in your laptop. You can use this data to train the system to, say, recognize gestures of your choice or to identify particular people's voices.
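On the Arduino side, feeding sensor data to the host is typically just a matter of printing readings over serial. Here's a minimal hypothetical sketch for a 3-axis analog accelerometer; the pin assignments, baud rate, sample rate, and whitespace-separated line format are all assumptions, so match them to the input stage of the ESP example you're running.

```cpp
// Hypothetical sketch: stream a 3-axis analog accelerometer to the host.
// Pins, baud rate, and serial format are assumptions -- adjust to match
// the ESP example's expected input.
const int xPin = A0;
const int yPin = A1;
const int zPin = A2;

void setup() {
  Serial.begin(115200);
}

void loop() {
  // One whitespace-separated sample per line.
  Serial.print(analogRead(xPin));
  Serial.print(" ");
  Serial.print(analogRead(yPin));
  Serial.print(" ");
  Serial.println(analogRead(zPin));
  delay(10);  // roughly 100 samples per second
}
```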
Once trained, the system makes predictions about the similarity of incoming live sensor data to the recorded examples. These predictions can be streamed to a variety of outputs: back to the Arduino board, or over a TCP connection to a program written in Processing or other software. This lets the machine learning pipeline become part of your larger interactive project.
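For example, a hypothetical sketch on the receiving end might look like the following. It assumes the host sends each predicted class label back as a single ASCII digit over the same serial connection; the actual format depends on how you configure the output stage of your ESP pipeline.

```cpp
// Hypothetical sketch: react to predictions streamed back from the host.
// Assumes each predicted class label arrives as a single ASCII digit.
const int ledPin = 13;

void setup() {
  Serial.begin(115200);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    char label = Serial.read();
    // Light the LED while the most recent prediction is class 1.
    digitalWrite(ledPin, label == '1' ? HIGH : LOW);
  }
}
```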