Poster Presentation 2
1:00 PM to 2:30 PM
- Presenter
- Megan Bui, Sophomore, Electrical Engineering, Bellevue College
- Mentor
- Richard Glover, Chemistry, Lane Community College
- Session
- Poster Session 2
- Balcony
- Easel #99
- 1:00 PM to 2:30 PM
Fossil fuels are central to contemporary energy crises and climate change. Their combustion increases greenhouse gas emissions, which have raised overall global temperatures, a trend predicted to continue at an alarming rate. Fuel cells are alternative, sustainable energy sources that use hydrogen (or a hydrogen-rich fuel) and oxygen to generate electricity through electrochemical processes. I conducted a survey, focused on Bellevue College’s (BC) Chemistry Department, that indicated broad support for incorporating a simple hydrogen proton exchange membrane (PEM) fuel cell lab into the introductory chemistry curriculum. I will design a lab that sparks student interest in sustainability and exposes students to real-world electrochemistry applications while addressing electrochemical, thermodynamic, transport-phenomena, and clean energy concepts. The educational goals of this lab are to promote a deeper conceptual understanding of electrochemistry, to improve quantitative reasoning, and to strengthen explanations of observed scientific phenomena. I collaborated with BC’s Chemistry Department to determine learning outcomes and a systematic process for quantifiably assessing fuel cell labs from other institutions. This information was used to design an effective lab and lesson plan centered on fuel cells. Four fuel cell labs were evaluated: (1) a pre-constructed hydrogen PEM fuel cell from Horizon Fuel Cell Technologies, (2) a microbial fuel cell, (3) a fuel cell using platinum electrodes bathed in an acid solution, and (4) a fuel cell using graphite electrodes immersed in an acid solution. This research produced an economical, reflective laboratory experience that utilizes basic laboratory equipment and materials. The results are presented in an engineering framework that details how aspects of the lab promote critical thinking and engagement, address learning objectives, and remain cost-effective.
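For readers outside chemistry, the electrochemistry such a PEM fuel cell lab exercises is the standard, textbook pair of half-reactions (not taken from the abstract itself):

```latex
\begin{align*}
\text{Anode:}\quad & \mathrm{H_2 \rightarrow 2H^+ + 2e^-} \\
\text{Cathode:}\quad & \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2e^- \rightarrow \mathrm{H_2O} \\
\text{Overall:}\quad & \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{H_2O},
\qquad E^\circ_{\mathrm{cell}} \approx 1.23~\text{V at } 25\,^\circ\text{C}
\end{align*}
```

The standard cell potential of about 1.23 V is what students can compare against the measured voltage when relating thermodynamic predictions to observed behavior.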
Oral Presentation 2
3:30 PM to 5:15 PM
- Presenters
- Min Jing (Wendy) Jiang, Sophomore, Computer Science, Bellevue College
- Megan Bui, Sophomore, Electrical Engineering, Bellevue College
- Abduselam Mohammed (Abdul) Shaltu, Senior,
- Samuel Vanderlinda, Sophomore, Computer Science, Bellevue College
- Tejas Rao, Non-Matriculated,
- Mentor
- Christina Sciabarra, Political Science
- Session
- Session 2B: Machine Learning
- 3:30 PM to 5:15 PM
Reinforcement learning (RL) is a subcategory of machine learning in which an agent (the decision maker) observes its environment and executes the course of actions that maximizes its rewards. This is similar to teaching a pet to perform tricks using treats as positive reinforcement. Our research compares different RL methods on low-performance devices, such as a Raspberry Pi, in real-time, real-world environments. RL has gained popularity recently with breakthroughs from DeepMind’s paper, Playing Atari with Deep Reinforcement Learning, where an agent learns to play Atari games from raw pixels, and from DeepMind’s AlphaGo (DeepMind, https://deepmind.com/research/alphago) program, the first computer program to beat a world champion Go player. RL projects like AlphaGo have relied on big data, powerful computing resources, and simulated environments that do not require real-time interaction to train their models. Our group compares the effectiveness of different RL methods at an accessible level of computing power on offline devices that an average consumer could acquire. The team constructs a physical environment for the robot to navigate and creates an OpenAI Gym environment that our agents use to control the robot and receive feedback from its surroundings. We train our agents with different RL methods to navigate the environment optimally and avoid collisions, then compare the methods’ performance in our physical real-time environment. Reinforcement learning on small, offline devices could pave the way for a variety of devices that learn over time without being connected to a network. Imagine a small Mars rover that learns to navigate its environment efficiently over time.
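To make the train-then-navigate loop concrete, here is a minimal tabular Q-learning sketch on a toy grid standing in for the robot-navigation task. The grid layout, rewards, and hyperparameters are illustrative assumptions, not the team's actual setup, and tabular Q-learning is just one of the RL methods such a comparison might include:

```python
import random

# Illustrative toy grid: these constants are assumptions, not the team's setup.
random.seed(0)
SIZE = 4                      # 4x4 grid
GOAL = (3, 3)                 # target cell
OBSTACLES = {(1, 1), (2, 2)}  # cells that model collisions
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action; walls/obstacles penalize, the goal rewards."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in OBSTACLES:
        return state, -1.0, False   # collision: stay put, negative reward
    if nxt == GOAL:
        return nxt, 10.0, True      # reached the goal
    return nxt, -0.1, False         # small step cost encourages short paths

# Q-table over (state, action) pairs, initialized to zero
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        if random.random() < epsilon:                # explore
            a = random.randrange(len(ACTIONS))
        else:                                        # exploit current estimate
            a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
        nxt, reward, done = step(state, ACTIONS[a])
        best_next = max(Q[(nxt, x)] for x in range(len(ACTIONS)))
        Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
        state = nxt

# Greedy rollout with the learned table
state, path = (0, 0), [(0, 0)]
for _ in range(20):
    a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
    state, _, done = step(state, ACTIONS[a])
    path.append(state)
    if done:
        break
print(path)  # the greedy policy should end at GOAL while avoiding OBSTACLES
```

In the project described above, the `step` function's role would be played by the OpenAI Gym environment wrapping the physical robot, with sensor readings as observations and collisions as negative rewards.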