Artificial Intelligence as a New Design Material & UX Design Innovation: Challenges for Working with Machine Learning as a Design Material
Both articles point out the challenges AI faces and why it falls short of satisfying humans in the ways they expect to be satisfied. These points raised a whole new question in my mind, perhaps slightly off-topic but still relevant. AI receives stimuli and responds to them, much like the human nervous system. However, human behavior is not limited to the data we have gathered (i.e., our experiences); we also rely on the data of other human beings, the restrictions set by society, emotional responses, and most importantly, empathy. Without such inputs, these AI challenges arise. Of all of these, empathy is in my opinion the hardest to replicate, because there is no hormonal or physiological explanation for why we have this skill other than our evolutionary coding. (I got so curious about why we empathize that I read up on it; if you are curious too, here you go: https://www.psychologytoday.com/blog/your-wise-brain/201003/how-did-humans-become-empathic.) Even though we can transfer data from one computer to another, encode society's restrictions, and teach emotional responses, I believe the step that sets humans apart from AI, makes AI inefficient at human-to-human interaction, and causes dissatisfaction (or sometimes even danger) in adaptive technologies is that machines can learn from behavior but cannot predict expectations. One of the points in Dove & Zimmerman's article emphasizes exactly this: "They may inadvertently display an inability to understand the intent behind users' behavior, which results in 'intelligent' features being perceived as useless and unintuitive." Until that changes, these systems will most likely be used as tools for data exchange and complex problem solving, and will continue to face the challenges discussed in the article.
And I think sharing control is the most dangerous of all, because if the controller programs the AI with malintent, the AI is in no position to predict the ramifications of its behavior. As for why it is hard to prototype with ML, this again relates to the points I made above: machines currently lack common sense and are unable to make sense of data in the complex, all-inclusive fashion that humans do, which makes them great at assisting us by providing information but less than satisfactory as a dependable source of feedback.
Machines Learning Culture
When it comes to art, I have always wondered how conscious the artist is when he/she creates his/her paintings. I have always been curious whether Artist B arranged his work according to inspiration gathered from Artist A's work, or whether it was merely a coincidence that, for example, they both painted the interior of a room with blue windows. Or are the critics reading too much into it? In one visual, Bazille's and Rockwell's paintings are compared. It is true that the machine has identified similarities in composition and subject, but it is still we who interpret what any of these findings actually mean. This is similar to the "Normalizing Machine" (the Turing-test-inspired installation). Does the machine know that it is looking for normality, or do humans interpret its algorithmic patterns as proof of what we hope/consider to be normal? Is the machine really learning anything, or is it a mere illusion on our part, wishful thinking directed at the machine as it begins to respond in consensus with our expectations? In my opinion, those who reject machine-assisted analysis and those who work with it are just changing the way they get to the result, not the result itself, because the result is just an interpretation of the data, whether presented analogically or digitally. So in the end, the machine is not "learning" anything; it simply begins collecting data in accordance with our expectations.