Mind Blowing Lessons!

     This week has been short, time-wise, which made it fast and furious! Monday, I continued working on programming assignments and the required readings. I am still relatively new to programming; although I wasn't formally trained in CS (I don't hold a CS degree), I love everything I have discovered and learned thus far. With that said, I was not aware of Kaggle. Kaggle is the world's largest data science community, with powerful tools and research resources to help programmers achieve their data science goals.

Kaggle.com Home page

     I started with a tutorial to help me understand Kaggle's environment. The tutorial showed me how to import a sizable dataset, and it tested my ability to read a data file and understand specific statistics about the data. I applied techniques to filter the data, and the last challenge was to construct my very own data model! Data mining is new to me, but I can undoubtedly see the benefits, applications, and power behind learning this new skill.
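For anyone curious what that workflow looks like, here is a rough sketch of the tutorial's steps (read, describe, filter, fit a model). The dataset and column names below are made up for illustration; the real Kaggle course uses a housing dataset, but the steps are the same:

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical housing-style data, standing in for the tutorial's dataset
data = pd.DataFrame({
    "rooms": [2, 3, 3, 4, 5, 4],
    "year_built": [1990, 1985, 2001, 2010, 2015, 1999],
    "price": [150000, 175000, 210000, 320000, 400000, 280000],
})

# "Understand the data": summary statistics for every column
print(data.describe())

# "Filter the data": keep only homes built in 2000 or later
newer = data[data["year_built"] >= 2000]

# "Build a data model": predict price from the other columns
features = ["rooms", "year_built"]
model = DecisionTreeRegressor(random_state=1)
model.fit(data[features], data["price"])
predictions = model.predict(data[features])
```

Inside a Kaggle notebook, the same few lines run against a real CSV loaded with `pd.read_csv`.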

     I can see using this tool in my CS classrooms. The Jupyter notebook interface encompasses everything you would need to get started working with data, and the notebook environment that contained the code was very intuitive and easy to use. By the end of the tutorial, I had created my own data model. Developing the skill set to import, disaggregate, manipulate, and use data to make plausible predictions is invaluable!

     Before I became an educator, I was a Yield Manager for my last employer. I distinctly remember working with warehoused data year over year to forecast current and future pricing models. I used Microsoft Excel pivot tables and filtering tools, which were not that flexible. Machine learning would have made a significant difference in building, disaggregating, and using that data for my forecasting models!

Each week we have guest speakers, whether they are in graduate school or working in the computer science industry. This week was no exception. Yongyi Zhao is a graduate student at Rice University. He is currently working on a device that will someday (hopefully) be able to read brain waves. His research and testing are fascinating: he is working on taking a human thought and re-creating it as a digital image. The applications for such a device are endless! From hospitals to education to individuals, the research and work taking place are astounding! Below is a prototype wearable device (still in development) that could be worn unobtrusively.

Neuroimaging Prototype Device – 1

 

MOANA Project

The MOANA project (Magnetic, Optical and Acoustical Neural Access) is a device that would be able to read and write brain waves, turning that information into text or images. This is "Mind Blowing" research! Some of the students at Rice are working on projects that could someday change the world!

The SWITCH PATHS-UP program is the fifth program I have signed up for and been accepted to. Each encounter raises the bar higher and higher. My students, my co-workers, and I have all benefited from the previous programs I attended.

This program is by far the most applicable to my current assignment. My students, current and future, are going to be amazed at the new level of excitement these tools will bring to my computer science classroom: computer vision (with OpenCV), artificial intelligence, machine learning and all its capabilities, and working with significant amounts of data through Kaggle, just to name a few.

Posted in Uncategorized | 1 Comment

Week 3

This is the week where it got real for me. We started off the week with a meeting with Allen, where we went over our end-of-summer presentations covering our experience in the program.

Our second day, we had a presentation by Daniel's friend Arun. He apparently works for DeepMind and told us all about his work with deep neural networks and how they are used to play video games at a high level. The most fascinating part, at least for me, was his story of how the network learned to play Go and beat professional player Lee Sedol. It even contributed to Lee's decision to retire.

That day was jam-packed with knowledge, because a few hours later we heard from Yongyi Zhao, a graduate student who is working on deciphering images from the brain using machine learning. The idea is that you show a human an image, map what parts of their brain activate, and use that to train the network. EVERYTHING about this blew my mind, and even now it seems like something out of a sci-fi movie. In the simplest terms that make sense to me: you shoot light into the brain and try to decipher whatever light comes back out. Obviously the signal is going to be a mess because of the skull and other biological matter, so they use machine learning to clean the signal up. Long after this program is over, I'm going to keep a lookout for this research. The applications are incredible!

Finally, on to our homework. I had taken a pause on the Coursera course because I felt like I needed some more background knowledge before moving forward. The resources Mary provided are incredible. The Kaggle course did an amazing job of showing you the material, explaining why things were done as shown, and then testing you on it. After that I did the TensorFlow assignment, and it really reinforced the material. I feel like I understand it more! The week is technically "over," but I'm going to spend it trying to complete a Kaggle competition and doing the Snapchat filter project. I want to absorb as much knowledge as possible from this experience.

Posted in Uncategorized | Leave a comment

Mind blown!

DeepMind

DeepMind's Mission

We started the week with a big bang! Daniel Angel organized a meeting with Arun Ahuja. Mr. Ahuja works at DeepMind, a UK artificial intelligence company founded in September 2010 and acquired by Google in 2014. We learned that their mission is to solve intelligence and use 'it' to solve everything else. Mr. Ahuja (I am paraphrasing here) describes intelligence as the ability to use prior experience and acquired knowledge to solve a new problem at hand, one that isn't already 'learned'. In the first two weeks of the SWITCH program, we read a lot about different types of machine learning; DeepMind focuses on one specific type, reinforcement learning, which in layman's terms can be described as 'trial and error'.

The company uses various games as challenges to see if the system is able to solve them as a learning experience. No prior information is input; the system plays the game, for example brick breaker, and realizes that certain actions will lead to the score increasing while others will not. It uses this experience of trial and error to refine and repeat the actions that lead to an increase in score, and therefore the system wins the game. Mr. Ahuja mentioned that AlphaGo (a computer program designed by DeepMind) was able to beat the best human player at an ancient Chinese game called Go. Another cool fact: the app recommendations on the Google Play Store were developed by DeepMind as well.
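To make the 'trial and error' idea concrete, here is a tiny tabular Q-learning sketch of my own (nothing like DeepMind's deep networks, but the same principle): an agent on a five-cell corridor tries actions, observes which ones lead toward the reward, and reinforces them until it has learned to always move right.

```python
import random

random.seed(0)

N_STATES = 5          # corridor cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Trial: explore sometimes, otherwise exploit what has been learned
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Error-driven update: nudge the value of (state, action) toward the
        # observed reward plus the best value reachable from the next state
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy: in every cell, step right toward the reward
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

No rules of the "game" are ever written into the agent; the preference for moving right emerges purely from trying both actions and seeing which one the score rewards.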

Diversity in a Classroom

Guest Speaker Info

Monday afternoon, we were able to discuss issues that would usually be hard to discuss in a formal setting, especially a classroom. That being said, it is vital that we have these necessary but uncomfortable conversations about diversity and implicit bias. Dr. Antoine-Morse informed us about implicit bias and its presence in various social settings, with a classroom-setting example from Allen Antoine. Not only did we identify bias, but we got to discuss how we had all seen it in our very own lives, with some hard-hitting scenarios from my fellow RETs. Dr. Antoine-Morse dove into identity as well, and its seven aspects: religion, age, race, gender, nationality, ethnicity, and class. At the end, as Mr. Franklin stated, the session opened our eyes to our own biases in our classrooms. However, Mr. Medrano felt that we had just dipped our toes into the pool of a very intricate topic and need more time to discuss and think about diversity in the classroom; I'm inclined to agree with that notion.

Guess who’s back? Back again? Yong is back!

Yongyi Zhao, my mentor from last summer, returned to tell us about his mind-blowing project called Magnetic, Optical and Acoustical Neural Access (MOANA); basically, it is non-invasive neuroimaging. His research project has moved into phase two: whereas last year he didn't have results, this year he had collected some data. I was excited to see this! At the ambitious end, MOANA would like to bring telepathy to the real world.

Telepathy and its application

Goals of MOANA

Mr. Zhao is concentrating on the "read" aspect of this research project, so he is interested in developing a non-invasive, wearable, functional neuroimaging device. I'll try to explain it in the best terms I can, because it was quite a difficult concept for me to wrap my head around. Imaging is a mapping of photons from scene to sensor; however, there is a lot of scattering of this light before it is received by the sensor, due to the presence of the skull. Hence, it is difficult to see a clear image. This is where DOT (diffuse optical tomography) comes in: it models how light behaves in scattering media, and reverses this process to determine properties of the brain. Mr. Zhao noted that most of the scattering showed up early in the received signal, as those photons were unable to reach their final destination; they just bounced off shallower surfaces and returned quicker than anticipated. Dismissing these early signals leads to a better signal-to-background ratio.
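The numbers below are purely illustrative (my own toy model, nothing from the MOANA project), but they show the time-gating idea in a few lines: simulate a detected pulse where the early photons are shallow-scatter background and the later photons carry the brain signal, then compare the signal-to-background ratio with and without a time gate.

```python
import numpy as np

t = np.arange(100)                               # time bins after the laser pulse

# Early photons: scattered off shallow tissue, back quickly, carry no brain info
background = 50.0 * np.exp(-t / 10.0)
# Later photons: the few that reached the brain and returned
signal = 5.0 * np.exp(-((t - 60) ** 2) / 50.0)
measured = background + signal

def signal_to_background(counts, sig):
    """Ratio of signal photons to everything else in the kept window."""
    return sig.sum() / (counts - sig).sum()

gate = t >= 30                                   # discard the first 30 bins
sbr_all = signal_to_background(measured, signal)
sbr_gated = signal_to_background(measured[gate], signal[gate])
```

With these made-up numbers, gating improves the signal-to-background ratio by roughly an order of magnitude, because the exponential background has mostly died off by the time the signal photons arrive.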

Reconstructed image

Above is an image that was shown in the presentation. The first picture of the clear R is the ground truth, and the second is the reconstruction- it isn’t very clear, but you can recognize the general shape of the R. The future applications of this research project are endless- perhaps we aren’t very far away from having telepathy after all.

Daniel Angel’s mind blown face!

It isn’t very clear in this picture- but Daniel Angel’s mind was blown by this whole presentation. It was definitely eye opening to say the least.

Posted in Uncategorized | 2 Comments

Week 2: Calm Before the Storm

Nothing like a cup of Python in the morning. Did I say that right? This week I dipped my toes into Computer Vision and Machine Learning. My first program captures a live video feed from my webcam and uses pre-trained Haar feature-based cascade classifiers to detect my face and eyes. The process was surprisingly simple and the results quite satisfying!


Posted in Uncategorized | 2 Comments

“Hello Open-CV World!”

     Week two is in the bag! Monday, the learning curve and intensity level increased. We engaged in a conversation about how we could tie our proposed lesson plan to our “main” objective. At this point, research is in full swing. I am constantly brainstorming a possible lesson to create. I have a great idea. I just have to refine it for the classroom. 

      I immersed myself deeper into computer vision, and began to understand the programming behind it and its power. As a task, I displayed the feed from my computer's webcam using OpenCV, with bounding boxes depicting the face and eyes in the image. I also made a copy of a particular image file using OpenCV.

Facial Recognition -Bounding Box, Erika ©

Working with Python to this degree has increased my skill level. Daily, I am reimagining how I could take the lessons learned and incorporate them into my CS curriculum.

 Tuesday, I read an article explaining Mask R-CNN. Simply put, it can detect each object instance in an image and segment it from the others extremely fast. This technique would be very beneficial when searching for specific criteria within hundreds or thousands of images.

 Our guest speaker for this week was Amil, a graduate student at Rice University. He is currently working on face and gaze detection in children. The system he described is called FLASH (Family Level Assessment of Screen use at Home). An initial raw video feed of the child is collected, and the system then has three steps:

  1. Face Detection
  2. Face Recognition
  3. Gaze Detection 

Steps depicting the FLASH system

     The system can accurately estimate how much the "target" child watches the video feed. This study could have far-reaching applications for schools and teachers. Teachers are using computers and other devices with screens more and more due to online learning, and this system could help them know whether or not a student is paying attention to the lessons on their screen.

      Wednesday, hump day! Today we had a great discussion about topics and ideas for our TeachEngineering lesson plan. We are gathering more and more insight into what we can expect when we dive into creating our lesson. That led us to a talk and Q&A session with Jimmy Newland.

     Jimmy Newland is a past participant of the PATHS-UP program. He took us through the steps necessary to take our lesson plan idea from conception to reality. He walked us through his published lesson. His lesson, “Visualize your Heartbeat,” was very creative, intuitive, and engaging. 
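I don't know the exact code behind "Visualize your Heartbeat," but the usual camera-pulse idea is simple enough to sketch: when a fingertip covers the camera, blood flow modulates the average frame brightness at the heart rate, so averaging a color channel per frame and counting peaks gives a rough pulse. Everything below (the function names, the synthetic frames) is my own illustration, not his lesson:

```python
import numpy as np

def brightness_series(frames):
    """Mean green-channel value of each frame (frames are HxWx3 arrays)."""
    return np.array([frame[:, :, 1].mean() for frame in frames])

def count_peaks(series):
    """Count strict local maxima above the mean: a crude beats-per-clip estimate."""
    centered = series - series.mean()
    peaks = 0
    for i in range(1, len(centered) - 1):
        if centered[i] > 0 and centered[i - 1] < centered[i] > centered[i + 1]:
            peaks += 1
    return peaks

# Synthetic demo: 90 "frames" whose brightness pulses 3 times
t = np.linspace(0, 3 * 2 * np.pi, 90)
frames = [np.full((10, 10, 3), 100.0 + 20.0 * np.sin(x)) for x in t]
beats = count_peaks(brightness_series(frames))
```

With real webcam frames the series is noisier, so a lesson would likely add smoothing before peak counting, but the pipeline is the same.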

 

Cognitive Load Theory

Rice University’s RET Program

   Thursday, we talked with Mary Jim. We discussed the article "Appearance-Based Gaze Estimation in the Wild" by Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling. The article discussed the different methods used to monitor test subjects while they looked at objects on their computer screens. The test subjects were recorded in varying lighting conditions, at various times of day, and in various seating positions. The authors collected over 213,000 images before publishing their results.

Posted in Uncategorized | Leave a comment

Open Sesame (CV)

Hello access!

This week we requested that Mary (our intern) upload our whole curriculum, so we could work at our own pace. You'll notice that each of the RETs will be doing their own thing. We started the week with our reading and posted some questions. The reading highlighted that MPIIGaze was significantly better than the previous attempts at gaze detection. I followed Eureka tutorials to start with OpenCV, but I decided to change from PyCharm to Sublime, due to how slow PyCharm was on my school laptop.

Minor Setback and Over the Hump

Despite changing from PyCharm to Sublime, I caught up and was able to get face and eye detection working through Haar cascades. I wanted to check if my glasses were the reason one of my detected eyes seemed longer than the other; however, removing them did not change it. I wrote down the steps and initial code for this task. I'll be using Cognitive Load Theory (CLT) in class to assist my advanced students in completing this assignment. Thanks to @JimmyNewland for introducing this theory to all the RETs.

Face and eye detection with glasses

Face and eye detection without glasses

F.L.A.S.H. (Family Level Assessment of Screen use in the Home)

Anil Kumar Vadathya introduced us to his research project called FLASH. The goal of this research, a collaboration between various universities and researchers, is to monitor how long children spend in front of screens. We followed Anil's footsteps in tracing how he collects his data. First, a limited number of families are invited to participate (limited for monetary reasons), and raw video streams are collected. The first processing step is face detection, where the computer distinguishes the various faces on screen. Next is face recognition, where the computer hones in on the individual child being monitored. Last, the computer uses gaze detection to determine whether that child is viewing the screen, and the total amount of screen time is recorded.

Motivation behind FLASH- a real time system to automatically, accurately and non-intrusively monitor children’s screen use.

Anil stated that they have 95% average accuracy in detecting faces, with some obstacles being that the environment is not appropriately lit or the face is only partially visible. If he had an endless amount of money and time, Anil would like to increase the number of test subjects to increase the reliability of the research conducted.

Guess who’s back? Back again- Jimmy’s back! 

Jimmy’s Presentation 1

Jimmy Newland, my mentor from last summer, presented this week. He is conducting research for his PhD, and his presentation introduced us to how powerful a Research Experience for Teachers can be for improving teachers' knowledge and skills. A bit of background about Jimmy: he is a physics, astronomy, and computer science teacher at Bellaire High School. He also highlighted how valuable Cognitive Load Theory (CLT) can be in the classroom; I will definitely be using that! The presentation went over Jimmy's research experience as an RET, his published lesson plan on the TeachEngineering website, and his own PhD research scope. I thoroughly enjoyed his presentation and his energy. I hope to see him in action again as a professor.

CLT example used by Yong last summer in my previous RET experience.

Jimmy’s research as an RET.

To conclude this week, I started early on the Kaggle course and hope to complete it by today. I'm excited to work with Mary to start on gaze detection using machine learning. I'm also very thankful that @Daniel Angel was able to invite a friend who will talk to us about Google's DeepMind; it will be a once-in-a-lifetime experience. Looking forward to next week.

Posted in Uncategorized | 4 Comments

SWITCH RET Week 2 ✌🏽

The goal for our research this week was to be able to display a webcam stream using OpenCV.

To complete this task, I decided to follow the OpenCV tutorials. They walk you step-by-step through every task, giving you the source code. I'm still working my way through them; nevertheless, I've uploaded all the source code to this GitHub.

Webstreaming with OpenCV was surprisingly easy. I used grayscale instead of color. Later in the week, in our mentor meeting, Mary shared with us how the human eye gets most of its information from grayscale anyway (the edges, mainly), so folks use grayscale to save memory without really impacting functionality.

First Webstream

Since getting the webstream up and running left me with some additional research time, I decided to dive into all the other tutorials in OpenCV and learn more about it.

Here are some of the cool things I tried:

I tracked objects of different colors, here is a blue hand sanitizer holder.

Object Tracking

I also tried some image blurring, but wasn't very successful. I tried changing the format of the picture, and even Gaussian blurring, but couldn't get it to work for me.

Image Blurring

Something I also couldn't get to work was saving the webstream to a video file and playing it back. I'm working on the latest Catalina OS and spent way too much time looking for the right video codecs. The furthest I got was saving the file, but I couldn't get it to play back.

Can’t Save and Play attempt

Our week as always was mostly filled with meetings, but we got to learn about FLASH face detection from grad student Anil Kumar Vadathya, who is conducting research on gaze detection. This was a really interesting presentation!

Next week we will dive into Machine Learning, stay tuned!

Posted in Uncategorized | 4 Comments

Week 2

For our first assignment, we had to use OpenCV to stream our webcam. I was excited to start immediately. Fortunately, the documentation was superb, and right away I was able to use the OpenCV library and Python code to stream frames from my webcam. I was actually kind of surprised how easy it was and that I had never heard of or looked into this library before!

 

I decided right away to work on expanding the code and used the CamShift method mentioned in the documentation to track a certain pixel distribution. At first it tracked my shirt, so I figured out the RGB values for my skin tone and ran the code again.

 

 

 

It successfully locked on to my face, except this time the box was slightly skewed and kept including my hair. Since I felt I didn't have a full grasp of what exactly was happening, I took a step back and tried the Haar cascade method on a still picture.

I was very encouraged by a successful run, so applying what I had learned, I finally got it working on a video feed. Honestly, I can't describe how excited I felt. To think I would never have found this library if it weren't for the SWITCH program! I think I got complacent, and something that HAS to be true for this field is that you should never stop learning.

 

Our first presentation this week was by Anil Vadathya, and he told us about his work on using gaze detection to track children's screen time. He told us that the current way of tracking screen time was through self-reporting using a TV diary, but that this method is prone to error.

So, by using gaze detection, he came up with FLASH (Family Level Assessment of Screen use in the Home). The plan was to use gaze detection to keep track of when the child was looking at any screen, and the result was that they were able to track this accurately, with an average accuracy of 95%.

 

Last but not least, Jimmy Newland, a former RET participant, came to give us advice and talk about his prior experience with the program. You can tell that guy is an incredible teacher. He had this energy and real passion about the work he was doing, and you couldn't help but want to learn more about it. He talked to us about his lesson using a camera to track your pulse and how he applied it to his classroom using Arduinos. It made me both excited and nervous about the end-of-program lesson we have to submit, but I'm hopeful that with this group we'll be able to get it done!

Posted in Uncategorized | 2 Comments

Another Teacher COVID Story: Ch.2 Many IDE Problems

Ill Tidings

It's been such a strange time. It feels as though 2020 has taken another drastic turn, and this time eyes turn to Texas in the tale of COVID-19 infections. I cannot say I am not scared; even when not lethal, the damage is very real for those who exhibit symptoms. I have already had two scares in the span of less than three weeks. It's hard to focus on things and hunker down when my own mother has been exposed because she works in the hospital system here in Houston. By Tuesday, I will be back in my new apartment in Austin, away from family, away from this. It feels odd, but that's not supposed to be the focus of this post. I did learn and do some good things, despite how this opener might sound. There continues to be hope and small reprieves.

Posted in Uncategorized | Leave a comment

SWITCH Your Perspective

When my colleague forwarded me an email about a summer computer science research experience for teachers at Rice, I thought, "this MUST be a gimmick". NO WAY they're going to have a bunch of secondary teachers actually try something difficult. As I read more about the SWITCH program and its partnership with PATHS-UP, I couldn't contain a hopeful smile as I crossed my fingers and hit the submit button on my application. PATHS-UP's vision is to make real changes in the health care outcomes of underserved populations by developing cost-effective technology solutions that can be delivered at the point of care (POC). These are two things I care deeply about: revolutionary tech AND combating the crippling effects of poverty on the health of African-American and Latino populations in the US. Against the backdrop of COVID-19, the disproportionate effect of this virus on African-American and Latino populations highlights the importance of that vision. We must be vigilant in sharing information in order to close the gap.


Posted in Uncategorized | 2 Comments