Claire

Image Classifier

https://editor.p5js.org/Claire_Z/sketches/knHYIsoSj

I made an audio player controlled by hand gestures. You can play, pause, switch tracks, and change the volume with different gestures. To keep the program from reading the same gesture repeatedly, I added an "empty" state: all the other functions are only triggered when empty is true. In other words, the program only reads the first label after the "empty" label. Sometimes, though, the first label it recognizes is not the gesture I'm showing to the camera, so the audio player takes the wrong action. I also found that the model seems more accurate when running in Teachable Machine than in p5.
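The "empty"-state gate described above can be sketched as a small state machine. This is a minimal illustration, not the actual project code: the function name `handleLabel`, the label strings, and the variable names are my own assumptions about how the classifier's repeated label outputs might be filtered.

```javascript
// Hypothetical sketch of the "empty" gate: the classifier keeps emitting
// a label every frame, but only the first non-"empty" label after an
// "empty" label should trigger an action.
let readyForGesture = false; // armed once the "empty" label is seen

function handleLabel(label) {
  if (label === "empty") {
    readyForGesture = true; // hand removed: arm the next gesture
    return null;            // "empty" itself triggers nothing
  }
  if (!readyForGesture) {
    return null;            // ignore repeats of a held gesture
  }
  readyForGesture = false;  // consume the armed state
  return label;             // this one label triggers play/pause/volume/etc.
}
```

With this gate, a stream like `"empty", "play", "play", "empty", "pause"` fires only once for `"play"` and once for `"pause"`, which matches the behavior described above, including the failure mode: whatever label happens to arrive first after "empty" is the one that fires, even if it was misrecognized.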

Teachable Machine.mp4

p5 music player.mp4


Excavating AI

The article “Excavating AI” discusses how AI models can become biased because of badly labeled training images. One example from the article is that an image of a man drinking a beer used to be labeled “alcoholic”; that tag was later removed. It seems very important what we are teaching the model, and who is teaching it. It’s similar to guiding a young child to form correct values. However, there is no fixed answer to the “correctness” of one’s values, and the problem is not restricted to training AI models. AI has bias because we live in a world with bias, and we form stereotypes in a pattern similar to machine learning: both learn from the data they are given. Take the example of the alcoholic: from past experience, from the news, and from stories shared among people, we learn that a man holding a bottle of alcohol might be an alcoholic, and that an alcoholic is possibly dangerous. So what I’ve learned while coming to understand the world is that it’s wiser to stay away from a person who seems drunk. Even the ability to tell whether a person is drunk is learned from lived experience.

I think it’s very possible that we cannot get rid of stereotypes entirely. To avoid bias in a machine learning model, it’s important to define which stereotypes are harmful and which are not. However, that is itself a very subjective question.