Doc Dingle's Website
Brent M. Dingle, Ph.D.

Mentoring - People and Projects

While working in academia and industry I try to invest time and energy toward helping undergrads, grads, and interns learn and succeed. Many of these endeavors involve students or interns working on projects. Occasionally, it also leads to published papers.

While I would like to list all the projects, doing so would require more time than I currently have. I hope to add more as time allows. In this posting I do not mean to exclude anyone, and I cannot fully express how wonderful all my mentees have been. While I am supposed to be the teacher, they have taught me many things along the way. These few examples cannot fully describe the hard work each and every one of them has put forth.

Reality Capture - Intern - Trent Riek - 2021

Picture showing Trent performing laser scanning

Trent Riek, under the guidance of Nick Castillo and me, learned how to perform terrestrial laser scanning. An image of him taking a scan of my house is shown here.

Picture of scanned head placed in AR on a stool bench

Trent also worked on a project to convert models and scan results into USDZ files. The example here is derived from a near-range scan of my head. If you follow this link on an iDevice, or on some other mobile devices, you should be prompted to view the model in Augmented Reality (AR). Doing so will allow you to place my head anywhere you deem acceptable. Once placed, you can then walk around it and view my bald spots from a variety of angles.

Two versions of the presentation Trent and I put together, describing his adventures in scanning and offering a brief introduction to scanning processes, can be downloaded for your viewing pleasure:
[LaserScanInternTrent2021.pdf] or [LaserScanInternTrent2021.mp4].
The mp4 version shows the animations incorporated into the presentation.

Automated Background Filtering - Intern - Avery Link - 2021

For this project, intern Avery Link had to learn about AI/ML and image processing. She was constrained to use Python, OpenCV, HTML, Node.js, and JavaScript. Hardware was limited to a webcam and a laptop computer. Colette Eubanks, fellow intern Alec Moore, and I provided assistance, data, models, and guidance as needed. The role of customer for the project was played by my co-worker, Keith Janasak.

Cartoon showing a broken machine and an onsite worker calling a remote SME

The motivation for this project was to prevent accidental data spills during live video conferencing. The cartoon image here describes a potential scenario, where a piece of hardware has broken down. The onsite worker calls a remote Subject Matter Expert (SME) and transmits a live video to the SME. Unfortunately, behind the broken hardware, and captured in the video, is some other sensitive data. Transmitting the video therefore results in an accidental data spill - a security risk.

Automatically preventing such accidental data spills would be useful in many different business settings. So a project was devised to see if it could be accomplished - in less than 3 months.

In sum: the goal of the project was to automatically recognize an object of interest in a live video stream and blur out everything else. Accomplishing this required each frame of the video to be captured, processed, and only then sent on to the receiver. Thus the sensitive data in the background would never leave the local computer, and no data spill would be possible.
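To make that per-frame loop concrete, here is a minimal Python/OpenCV sketch. The detect_object_mask function is a hypothetical stand-in for the recognizer described below (here it just marks a centered rectangle so the sketch runs); only the blurred composite is ever handed to the outgoing stream.

    import cv2
    import numpy as np

    def detect_object_mask(frame):
        # Hypothetical stand-in for the ML recognizer: mark a centered
        # rectangle as the "object of interest" so the sketch is runnable.
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        h, w = mask.shape
        mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 255
        return mask

    def blur_background(frame, mask, ksize=(51, 51)):
        # Blur everything outside the mask; keep the object of interest sharp.
        blurred = cv2.GaussianBlur(frame, ksize, 0)
        mask3 = cv2.merge([mask, mask, mask])           # one channel per color plane
        return np.where(mask3 == 255, frame, blurred)   # sharp object over blurred scene

    cap = cv2.VideoCapture(0)                           # local webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = detect_object_mask(frame)
        safe_frame = blur_background(frame, mask)
        cv2.imshow("outgoing", safe_frame)              # only safe_frame would ever be sent
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()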

This in turn required a way to recognize the object of interest. Several different methods were tried, eventually focusing on a machine learning methodology.

Conceptually this problem is similar in nature to blurring the background behind a person during a video call, which Avery also implemented, as shown to the right.
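For the person-in-a-video-call variant, one common approach is a pre-trained segmentation model. The sketch below uses MediaPipe's selfie-segmentation solution purely as an illustration; it is not necessarily the model Avery used.

    import cv2
    import numpy as np
    import mediapipe as mp

    seg = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # MediaPipe expects RGB input
        mask = seg.process(rgb).segmentation_mask       # float mask, ~1.0 where a person is
        condition = np.stack([mask] * 3, axis=-1) > 0.5
        blurred = cv2.GaussianBlur(frame, (51, 51), 0)
        out = np.where(condition, frame, blurred)       # person sharp, background blurred
        cv2.imshow("blurred background", out)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()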

Clicking on the cartoon image should link to an animated movie sequence describing the motivation of the problem.

This project resulted in a conference presentation and published paper:

  • Using Machine Learning to Focus on an Object of Interest During Remote Collaboration
    2022 Annual Reliability and Maintainability Symposium (RAMS), 2022, pp. 1-7,
    doi: 10.1109/RAMS51457.2022.9893926

    co-authors: J.C. Eubanks, K. Janasak, A. Link, and A. Moore
    [pdf]

STEM Training App - Intern - Colette Eubanks - 2019

This project was designed to help high school (or younger) students understand the relationship between typical blueprint drawings and their equivalent 3D models. It also demonstrates how most people understand 3D models more quickly and easily than blueprint drawings.

Blueprint image showing 3 different perspectives of an unidentified object

This was one of many projects Colette completed as an intern. At the time, she was completing projects faster than I could think of new ones. She eventually became a full-time employee and a work colleague from circa 2019 through 2023. While her career has taken her on another path, she remains a true friend.

The app was designed to be used by an instructor walking an audience through various models and discussing their differences. This video demonstrates the application in action.

Video Tracking - Intern - Daniel Reiling - 2019

Picture showing orb-eye-thing tracking a yellow ball being thrown

This project allows a virtual eyeball to track a user-selected color. In this example it tracks the yellow ball being tossed around.

The video is from a simple webcam, and the code is written in HTML and JavaScript. The code did have provisions to translate from the webcam's position to the estimated physical location of the eyeball on the screen, but the effect worked best when the web browser's window was placed somewhere near the webcam.
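Daniel's code was HTML and JavaScript; as a rough illustration of the underlying idea, here is the same color-tracking loop sketched in Python with OpenCV: threshold the selected color in HSV space, then take the centroid of the matching blob as the point for the eyeball to look toward. The yellow-ball HSV range here is an assumption; the real app let the user pick the color.

    import cv2
    import numpy as np

    # Assumed HSV range for a yellow ball; the real app was user-configurable.
    LOWER = np.array([20, 100, 100])
    UPPER = np.array([35, 255, 255])

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)           # 255 where the color matches
        m = cv2.moments(mask)
        if m["m00"] > 0:                                # blob found: compute its centroid
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (cx, cy), 10, (0, 255, 0), 2)
            # (cx, cy) is the point the virtual eyeball would look toward
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()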

This project was coded by intern Daniel Reiling. The model is from the mind of Colette Eubanks; any similarity to any existing, or otherwise imagined, object is coincidental. Here is a video showing the app in action. You may need to download a player such as the VLC media player to properly view the video.

The true purpose of this project was to create a simple framework allowing quick turnaround times for testing and demonstrating new and existing tracking algorithms. This capability was accomplished through modular design of the JavaScript code. Note this is most useful for very early conceptualization and testing - and it is limited to just using a webcam.
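A minimal sketch of that modular idea, with illustrative names rather than the project's actual ones: every tracker exposes the same small interface, so a new algorithm can be dropped in without touching the rest of the framework.

    import cv2
    import numpy as np

    class ColorTracker:
        """One plug-in tracker: wraps the HSV color threshold sketched above."""
        def __init__(self, lower, upper):
            self.lower, self.upper = lower, upper

        def track(self, frame):
            """Return the (x, y) centroid of the matching color blob, or None."""
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            m = cv2.moments(cv2.inRange(hsv, self.lower, self.upper))
            if m["m00"] == 0:
                return None
            return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    class EyeballApp:
        """The framework only assumes a track(frame) method, so trackers swap freely."""
        def __init__(self, tracker):
            self.tracker = tracker

        def on_frame(self, frame):
            target = self.tracker.track(frame)
            if target is not None:
                self.look_at(target)

        def look_at(self, point):
            print("eye looks toward", point)   # stand-in for the eyeball rendering

    # Swapping in a different algorithm is a one-line change:
    app = EyeballApp(ColorTracker(np.array([20, 100, 100]), np.array([35, 255, 255])))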

Educational Gaming - Undergrads - Project to Paper - 2014

The Trial of Galileo: a Game of Motion Graphs
by Ian Pommer, Michael N. Flaherty, Alicia Griesbach, Bryant Seiler, John Leitner, Dylan Tepp

Game image - Trial of Galileo

This game was designed and implemented by a team of undergraduate students for coursework at the University of Wisconsin - Stout. Each student was enrolled in the Game Design and Development Program at the time. This was such an innovative game that I encouraged them to write a paper on it. They decided they would. I helped them write and submit it. I then persuaded two of them to present at the conference.

Ian Pommer, Michael Flaherty, and I then went to the CHIPLAY 2014 conference in Toronto, Ontario, Canada. While the game did not win the student game competition, everyone had a good time. It was a splendid learning experience and was well received by the students.

The paper can be found online: [pdf] [pre-print pdf]

Unfortunately the game was constructed using Adobe Flash Player, which Adobe designated as end-of-life effective Dec. 31, 2020. So the best way to experience it now is through the video submitted to the conference.

Audience Mood Detection
Interns - Crystal Hernandez & Kendal Wiggins - 2022

This was a summer intern project undertaken by two interns: Crystal Hernandez and Kendal Wiggins. It was motivated by a desire to determine if the mood of workers in a factory area or similar environment could be detected and tracked in a computational and statistical way. The intent was to focus on the group as a whole. The premise was simple: if the majority of the workers appeared tired, or depressed, for an extended period of time (days), then the probability that unsafe work conditions might manifest would be higher. If there were a way to track this, then there might be a way to change the mood or conditions to be more positive or active, thus encouraging a safer work environment for all.

Slideshow image: Graph of audience emotion, flow diagram of image processing

The scope of this far exceeded the 8 weeks of time available, so it was scaled down to a simpler case: detecting the average mood of a small to medium audience. This average would then be automatically displayed to the presenter, to assist in determining whether the audience was actively listening to and understanding whatever material was being presented. This would augment the presenter's perception of reality by automatically providing continuous, real-time feedback.

This project walked into several unexpected obstacles and learning opportunities, most of which revolved around social perceptions of video cameras and the monitoring of human states. So everyone was offered the chance to experience and explore ethical and moral considerations of software development and design.

The project was done mostly in Python, using pre-trained AI/ML libraries. Issues such as how to average the results and how often to poll the audience also had to be worked out. The hardware used was a webcam and a laptop computer.
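As an illustration of the polling-and-averaging loop, here is a rough Python sketch. Face detection uses OpenCV's bundled Haar cascade; classify_emotion is a hypothetical stand-in for the pre-trained emotion model the interns used, and the 5-second polling interval is just an example value.

    import collections
    import time
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def classify_emotion(face_img):
        # Hypothetical stand-in for the pre-trained emotion model; always
        # answers "neutral" so the sketch runs without that dependency.
        return "neutral"

    POLL_SECONDS = 5                                    # polling rate was a design question
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        votes = collections.Counter(
            classify_emotion(frame[y:y + h, x:x + w]) for (x, y, w, h) in faces)
        if votes:
            mood, count = votes.most_common(1)[0]       # simple majority = "average" mood
            print(f"audience mood: {mood} ({count}/{sum(votes.values())} faces)")
        time.sleep(POLL_SECONDS)
    cap.release()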

By the end of the 8 weeks, the interns had successfully created a program to detect the average mood of people in a meeting area. This was tested on a small audience, with promising results. Time ran out before any major testing could be undertaken, but it may be continued by others.