Reading #2 Response and Ethical Implications

AI as a New Design Material:

The author importantly notes that neural networks require huge amounts of learning data. Additionally, the data needs to be differential: it must contain both correct and incorrect examples so that the network can learn the difference between them. I have seen many people make false assumptions about machine learning because they do not understand this principle. Because the buzzwords are ubiquitous, many equate that ubiquity with ease of implementation. However, a great deal of work is involved, which the author addresses: defining correct solutions, allotting time to train and test the neural network, and acquiring immense amounts of data. You require twice the data you might initially assume, not to mention that humans may have to prepare and label it by hand. The author notes some interesting solutions to this, such as the image-recognition challenges on login screens, which I had not even considered to be a source of labeled data for computer vision.
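
To make the differential-data point concrete, here is a minimal sketch in Python (assuming scikit-learn and NumPy are available; the two synthetic clusters are entirely hypothetical stand-ins for "correct" and "incorrect" examples). It trains a classifier on labeled positive and negative data and holds out a test set, the train-and-test time budget the author mentions:

    # Minimal sketch: a classifier needs both positive ("correct") and
    # negative ("incorrect") examples to learn a decision boundary.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical feature vectors: positives clustered near (2, 2),
    # negatives near (0, 0) -- the "differential" data described above.
    pos = rng.normal(loc=2.0, scale=0.5, size=(200, 2))
    neg = rng.normal(loc=0.0, scale=0.5, size=(200, 2))
    X = np.vstack([pos, neg])
    y = np.array([1] * 200 + [0] * 200)

    # Holding out a test set is part of the train-and-test time budget.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = LogisticRegression().fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

Remove the negative cluster and the model has nothing to contrast against, which is exactly the failure mode the author warns about.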


As a side note, another use for GPUs, which the author does not reference, is mining cryptocurrency. With demand for GPUs driven concurrently by AI and Bitcoin in particular, prices have become astronomical while supply has remained intentionally low.


Back on track, another interesting concept the author notes is the sheer magnitude of data that tech companies harvest from people. They measure virtually all of your behavior while you use their products. This touches on the principle of privacy: as more devices become sensors for our behavior, to what extent do these devices influence us when recommending our future behavior, such as through advertisements or recommendations? It is interesting that the author takes the position that more machine learning is preferable. Clearly, more data is better for the network, but the cost is people's personal privacy. When one designs AI, one must also take the ethical implications into consideration. Are people consenting to the use of their information? Does profit come at the expense of privacy? Will nefarious actors use this information? The analogy to coffee brewing does not suffice.

The author should also have mentioned that designers must consider the security of the learning system they build. How do they prevent others from extracting the data? How might they prevent another machine from learning from the machine they taught? One must consider these scenarios when building the necessary infrastructure.
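
On those last two questions, here is a rough, hypothetical sketch of two commonly discussed mitigations against model extraction (the GuardedModel wrapper and its parameters are my own illustration, not anything from the article): return only coarse labels rather than full confidence scores, and rate-limit queries per client so a rival system cannot cheaply learn from the machine you taught.

    # Hypothetical wrapper hardening a prediction endpoint against
    # model-extraction queries. Illustrative only.
    import time
    from collections import defaultdict

    class GuardedModel:
        def __init__(self, model, max_queries_per_minute=60):
            self.model = model                # any object exposing predict()
            self.max_q = max_queries_per_minute
            self.history = defaultdict(list)  # client_id -> query timestamps

        def predict(self, client_id, x):
            now = time.time()
            recent = [t for t in self.history[client_id] if now - t < 60]
            if len(recent) >= self.max_q:
                raise RuntimeError("rate limit exceeded")
            recent.append(now)
            self.history[client_id] = recent
            # Expose only the hard label; withholding probability scores
            # makes it harder to train a surrogate copy of the model.
            return int(self.model.predict([x])[0])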


Challenges for Working with Machine Learning as a Design Material:

I appreciate that the authors note that unfamiliarity with ML is a source of apprehension about integrating it. I sympathize, given my own unfamiliarity. I have not been exposed to machine learning in my CS curriculum, as the course has several prerequisites and is not required; as such, I understand that while ML grows in pertinence, curricula tend to catch up rather than pioneer. Thus, a solution I find interesting is that classes such as this one integrate ML rather than dedicating an entire semester to parsing its inner workings.


The authors also briefly mention an interesting field that could have many ML applications: medicine. The article specifically mentions the detection of depression. Personally, I have seen many interesting projects attempt condition detection with intelligent systems. One such project involved an Xbox Kinect and Microsoft's Azure cloud services: a user would pace back and forth a set number of times so the system could detect, if I remember correctly, stroke or some other kind of brain injury. In reference to the first article, this is another example of gaming technology extending the capabilities of non-entertainment fields.


Another interesting topic the authors discuss is user comfort. Again, we should also consider the cost of pandering to the consumer. What if the machine learns to perpetuate culturally insensitive behaviors? What if, as in the case of Microsoft's Tay chatbot, the machine learns to be racist? The machine has the potential to encourage or reinforce societal harms. Further, should we then be obligated to train out such behaviors, even if users are more comfortable with a machine that mirrors their own?
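
If we accept that obligation, one crude place to start is filtering what the system is allowed to learn from in the first place. The sketch below is purely hypothetical (the blocklist terms are placeholders, and real content moderation is far harder than keyword matching): it screens user-submitted text before it ever enters the training queue, the kind of guard that might have slowed a Tay-style corruption.

    # Crude, hypothetical guard: screen user-submitted training text
    # before an online learner ever sees it. Keyword matching is a
    # placeholder for real content moderation, which is much harder.
    BLOCKLIST = {"offensive_term_1", "offensive_term_2"}  # placeholders

    def is_safe(text: str) -> bool:
        """Return True if no blocklisted token appears in the text."""
        return set(text.lower().split()).isdisjoint(BLOCKLIST)

    def ingest(example: str, training_queue: list) -> None:
        if is_safe(example):
            training_queue.append(example)  # eligible for the next training pass
        # Unsafe examples are dropped (or routed to human review).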
