Week 2: Calm Before the Storm

Nothing like a cup of Python in the morning. Did I say that right? This week I dipped my toes into Computer Vision and Machine Learning. My first program captures a live video feed from my webcam and uses pre-trained Haar feature-based cascade classifiers to detect my face and eyes. The process was surprisingly simple, and the results were quite satisfying!

The science behind it is fascinating, and I’m glad to be a beneficiary of the hard work done by Paul Viola and Michael Jones in this space. Training my own classifiers for other types of objects is something I’ve earmarked to extend my learning, but for now the focus shifts to Machine Learning for week 3. I will be using Google Colab and TensorFlow to complete classification tasks.

Google’s Colab notebook feels like using live playgrounds in Xcode. It provides an interactive notebook where you can combine snippets of executable code and rich text, along with plenty of other bells and whistles, to create a more immersive learning environment. There is one VERY distinct difference, though: Colab notebooks, which are really Jupyter notebooks under the hood, are hosted on Google’s cloud services, which means my students don’t need a fancy, high-priced Mac to use them! When I think about the biggest frustration this year with teaching intro to programming in Swift as part of Apple’s Everyone Can Code initiative, it was the challenge of extending the learning experience to my students at home. I used Repl.it to try to mitigate this, but without support for any of the fancy mobile app development libraries, like UIKit, programming at home with only a terminal can get pretty stale for today’s cutting-edge visual and auditory learners!

What I loved about live playgrounds is the ease with which they lend themselves to learning through guided inquiry and worked examples. I could present a worked example of a chatbot and then have my students add to it through specific guided instructions, such as adding a line of code here or modifying a parameter there. This way of teaching was highlighted in a presentation by Jimmy Newland, a current Bellaire HS teacher and PhD candidate at the University of Houston. During that presentation, Jimmy emphasized the importance of using Cognitive Load Theory (CLT) as the framework for pedagogy.
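To make that concrete, a chatbot worked example of the kind I have in mind could be as small as this (a hypothetical classroom starter, not an actual lesson I’ve written); students might be guided to add new entries to the dictionary or change the fallback message:

```python
# Toy chatbot worked example (hypothetical starter code for students).
# Guided task: add a new phrase/response pair, or improve the fallback.
responses = {
    "hello": "Hi there!",
    "how are you": "I'm just a few lines of Python, but I'm doing great.",
    "bye": "See you next class!",
}

def reply(message):
    """Look up a canned response, falling back to a default."""
    return responses.get(message.strip().lower(), "I don't know that one yet.")

print(reply("Hello"))
print(reply("What's for lunch?"))
```

The whole program fits on one screen, so a student’s working memory goes toward the one line they are changing rather than toward navigating a large codebase.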

John Sweller developed CLT in the late 1980s. It centers on the idea that we have three types of memory: sensory, working, and long-term. Every day we are bombarded with sensory information through all of our different senses. Luckily for us, sensory memory filters cut out most of this information but keep tabs on the most “important” items long enough for them to pass into working memory. Items in working memory can then be rehearsed and processed into long-term memory, or thrown away. It is in this space that teachers tend to operate, rehearsing repeated actions like solving one-step or two-step equations in an algebra class. The only problem is that working memory has a limited capacity, so overloading it with activities that don’t directly contribute to learning leads to an undesirable outcome: information loss.

Tools like Google Colab and Xcode playgrounds allow me to break larger problems or complex learning objectives into smaller, more manageable parts through partially completed programs and worked examples. The fact that we can merge coding with multiple sources of visual information (text, images, data graphs, videos, etc.) helps reduce the number of places where a student’s attention is divided, thus reducing the cognitive load. I’m looking forward to seeing how I can use Colab and CLT to bring my students into the world of Machine Learning. But first, I need to figure out what a Convolutional Neural Network is… stay tuned!
