stbrewer, Sarah Brewer

I liked this lab. It was cool to learn about all these sites and tools I hadn't previously worked with.

Part 1

This graph was made using an inflection search with the phrase "watch a movie". An inflection search is done by adding "_INF" to the end of one of the words; in this case it is at the end of "watch". This search plots the different inflected forms of the word (watch, watched, watching) and how often each appears.

This graph was made using a wildcard search. A wildcard search is done by adding "*" to a phrase; the viewer then plots the ten most common words that complete the phrase. In this case I used the phrase "Department of *".
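The two query styles above can be reproduced programmatically by building the same URLs the Ngram Viewer web page uses. This is a minimal sketch: the parameter names (`content`, `year_start`, `year_end`, `corpus`) mirror what appears in the browser's address bar, and the corpus code "en-2019" is an assumption based on the current English corpus, not something confirmed in this lab.

```python
from urllib.parse import urlencode

def ngram_url(query, year_start=1800, year_end=2019, corpus="en-2019"):
    """Build a Google Books Ngram Viewer URL for a query string.

    The query syntax is the same as in the web interface:
    "watch_INF a movie" expands the inflections of "watch",
    and "Department of *" plots the ten most common completions.
    """
    params = {
        "content": query,
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,
    }
    return "https://books.google.com/ngrams/graph?" + urlencode(params)

# The two searches from this part of the lab:
inflection = ngram_url("watch_INF a movie")
wildcard = ngram_url("Department of *")
```

Opening either URL in a browser should render the corresponding graph, since the special tokens `_INF` and `*` are interpreted server-side, not by this code.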

Part 2

The book I chose to use is Dracula by Bram Stoker.

The tools I found most insightful were the Contexts tab and the Trends tab, because they show information I never could have known about how words are used in the book. The display I liked best was the one shown above, which shows which words are used the most.

Part 3

Part 4

The first translation service I used was Google Translate. For the first text, most of the words came back the same, but there were some differences: some words changed, though the general message stayed the same. The second text came back exactly the same.

The second service I used was Bing, which produced no differences from the original text for either of the two texts I tried. The translation was exactly the same.

Part 5

My first experiment was with the camera machine learning tool. I made two classes, "Cheez-Its" and "No Cheez-Its", using photos of me holding a bag of Cheez-Its and plain photos of me. With 10 photos of each, the model reached 100% confidence on the Cheez-Its class but was a little shaky on No Cheez-Its. Then I added more pictures without Cheez-Its featured, and it worked better. I chose to do this because I like Cheez-Its.

My second experiment was with the camera machine learning tool again. I used pictures of me giving a thumbs up and other pictures of me making a peace sign. I added 100 pictures of each, and the model worked pretty well, but since my face was in both sets of pictures it got confused whenever I was in frame. So I added 200 more pictures of just my hands making both signs, which fixed the problem. I chose to do this because I was interested in how the model would differentiate between two similar images.