Machine Learning/Artificial Intelligence

Intelligence on Tap:

Using Machine Learning or Artificial Intelligence as a “design material” is an exciting prospect, one with potential that we likely can’t even fully anticipate.  However, I think the challenges that the author brings up (and still others he did not) need to be addressed before ML/AI can or should be “on tap.”  These challenges are: designing for transparency, designing for opacity, designing for unpredictability, designing for learning, designing for evolution, and designing for shared control.  I think it is especially critical that users always be informed when these systems are in use and of the unpredictable nature they embody.  Also, I think it’s worth mentioning that no matter how “intelligent” these systems may be, they should, on principle, never be able to override the authority of their users; otherwise, the creator of the system should be liable for any and all havoc caused by the machine.

Challenges for Working with Machine Learning as a Design Material:

“We did not see research investigating issues such as the impact of false positive and false negative responses from agents, or the need to collect ground truth labels, which might negatively impact UX.”  This is something I had been thinking about while reading the previous article: it sounds so grand when people talk about the power and efficiency provided by machine learning at big-data powerhouses like Amazon and Google, but what about when a model is accidentally or intentionally trained poorly?  I think the public chatbot experiments proved why this is a huge problem.  People may lie to the machine, which has no way of distinguishing true from false statements provided by its users (see the sketch below).  There may be a way to train machines to account for this in the future, but I assume that would mean teaching them to lie themselves, to understand lying, which would be incredibly problematic and would destroy any trust in the machine that users could build up over time.
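
To make that failure mode concrete, here is a toy sketch of my own (not any real system’s training pipeline): a bot that treats every user statement as ground truth has no defense at all against false input.

```python
# Toy example: a bot that learns "facts" directly from users and has no
# mechanism for telling true statements from false ones.
knowledge = {}

def learn(subject: str, claim: str) -> None:
    # Every user statement is stored as ground truth -- the core weakness.
    knowledge[subject] = claim

def answer(subject: str) -> str:
    return knowledge.get(subject, "I don't know.")

learn("the sky", "is blue")    # an honest user teaches the bot
learn("the sky", "is green")   # a malicious user silently overwrites it
print(answer("the sky"))       # -> "is green": the bot is poisoned, and it
                               #    has no signal that anything went wrong
```
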
Machines Learning Culture:

For this article, I wanted to comment on the part regarding “normalcy.”  The Turing Normalizing Machine is designed to identify what makes people look “normal,” and its creators hope to decode the mystery of “what society deems ‘normal.'”  But here is the problem with this goal: it can never truly be solved.  Each person has their own beliefs and biases about what normal is, which in turn affect what they consider a normal appearance.  As a consequence, the machine will be fed flawed and contradictory data.  Sure, the creators can construct an image of what the machine thinks is the most “normal” appearance, but even if it could sample the data of every person alive, it couldn’t choose a form that satisfied everyone’s “normal” (the toy calculation below makes this concrete).  In other words, people may see the machine’s aggregate person and think it looks weird, and they would be neither right nor wrong, since what is normal to them differs from what is normal to people of other locations, beliefs, ages, genders, etc.  All this to say: it’s a fun little project, but it will never “decode the mystery,” and it has no real, practical value other than ironically pointing out all the differences in perspective about “normal” that prevent any one “normal” from ever being universally accepted.
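
To see why aggregation can’t resolve the contradiction, here is a toy calculation with entirely invented numbers: if people’s ideals of “normal” cluster into two camps, the machine’s aggregate lands between them, close to nobody’s actual ideal.

```python
import numpy as np

# Invented data: two camps of people whose ideal "normal" appearance is
# encoded as a 2-D feature vector, clustered around different points.
rng = np.random.default_rng(0)
camp_a = rng.normal(loc=[-5.0, 0.0], scale=0.5, size=(500, 2))
camp_b = rng.normal(loc=[5.0, 0.0], scale=0.5, size=(500, 2))
everyone = np.vstack([camp_a, camp_b])

# The machine's "most normal" face: the average over all samples.
aggregate = everyone.mean(axis=0)

# How far the aggregate sits from each individual's own ideal:
distances = np.linalg.norm(everyone - aggregate, axis=1)
print(aggregate)        # roughly [0, 0]: squarely between the two camps
print(distances.min())  # several units: far from every single person
```

The mean of a multimodal population need not resemble any member of it, which is exactly the “weird to everyone” outcome described above.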

Sengers and Blythe Readings

Sengers–“The Engineering of Experience”

I found the explanation of the transition from a casual work/home balance to a strict work/home separation, adopted in order to increase efficiency at work, to be quite interesting, but I do have a hesitation: although I partially agree that something is lost by adopting “Taylorism” in its purest form, the author doesn’t really provide a sufficient argument for why increasing efficiency in the workplace, or at home with our “fun,” is inherently wrong.  The author claims that “as a culture we need to consider systems that take a more integrative approach to experience,” but does not even try to prove why this assertion is true.  On one hand, I agree that it can be nice to be more relaxed about the boundaries of work and play, allowing for more neutral experiences; on the other, I love the separation that allows me to clock out at the end of a work day and not have to think about work when I am home with my family or hanging out with friends.  I love this quote: “Human behaviour is rich, complex, messy, and hard to organize into rules and formal models.”  But I think the author may be ignoring the fact that some people enjoy striving for efficiency and organizing their lives with “to-do lists” and “schedules.”


Blythe–“Making Sense of Experience”

I enjoyed reading this excerpt, but I will say it was a bit of an exhausting “experience” (pun intended) navigating the highly specific philosophical and etymological discussion.  I think I agree with most of what the author is asserting, but there was one thing I was not too sure about.  Perhaps I misunderstood something, but it seems contradictory that, at the beginning of the paper, we are told “experience” should not be confused with subjective perceptions, yet later several subjective aspects of experience are described (the sensual thread, the emotional thread, making sense in experience, interpreting, etc.).  The author even states at one point, “People do not simply engage in experiences as ready-made, they actively construct them through a process of sense making.”  In other words, an experience is subjective in how people make sense of it.  Experience may contain other aspects that are objective, but it definitely contains a great deal of subjectivity.

Sidenote–I thought this quote was particularly interesting: “we cannot design an experience. But with a sensitive and skilled way of understanding our users, we can design for experience.”  This is probably a good observation to keep in mind while studying “User-Experience Design” as a class.

Week #1 Readings

The Engineering of Experience

In Phoebe Sengers’ “The Engineering of Experience,” she argues that designing a system extends beyond the typical “engineering model” and requires an interdisciplinary approach that combines technology with design, philosophy, and culture. She writes at length about how “fun” and “work” have become such separate things. We, as humans, have become so good at maximizing the efficiency of our actions that we become “mindless” while working, a pattern labeled “Taylorism.” I do agree that work can sometimes become so mindless that you could do it in your sleep. I feel it is our responsibility as creators to consider the human aspect and touch of the systems, experiences, and physical items we create. I agree with Sengers’ final thought that systems should focus on users’ ability to “engage in complex interpretation using a vast amount of cultural background knowledge” and on their reactions, as that would create a more cohesive experience for the product and a more successful system overall.

Making Sense of Experience

The second article tries to break down what creates an experience and what an experience is. However, the framework and threads of experience make the topic quite confusing to read and understand. Dissecting what creates an experience leads to the conclusion that designers are unable to create an experience, but we are able to design for an experience. I definitely agree that we could never create an experience for our users: users are the missing piece of every experience we design, and it is there that they interact with and explore the potential in the scenario. It is important to understand the framework presented by the authors; however, the variables at play are always free-flowing and fluctuating.

Thoughts on “The Engineering of Experience”

I found it interesting how “The Engineering of Experience” reflects on the adoption of Taylorism in Western culture. As a computer scientist, I have always focused on the benefits of efficiency and optimization, but never did I take a moment to consider their negative influence on human experience. Fundamentally, I think it all comes down to the fact that, as a society, we have a complex cycle of supply and demand that needs to be fulfilled. Noting this, I believe Taylorism emerged with good intentions, as an approach to solving a problem; however, at the time of its invention we did not, as a society, have enough insight to predict its future implications.

From a systems and HCI point of view, I think “The Engineering of Experience” touches on something absolutely spectacular: “Instead of representing complexity, bootstrap off it.” Humans will naturally respond to a stimulus with complex behavior (from the perspective of a computer); therefore, if a system can feed off the complexity of human behavior, then the internal model of the system itself doesn’t have to be complicated to create an immersive experience.
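
A classic illustration of bootstrapping off human complexity is an ELIZA-style chatbot. The pattern rules below are invented for illustration (not taken from Sengers): a trivial internal model is enough, because the perceived depth comes from the person reading meaning into the machine’s reflections of their own words.

```python
import re

# A handful of reflection rules: the system's model is trivial, and the
# apparent intelligence is supplied entirely by the human participant.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bI feel (.+)", "What makes you feel {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."  # fallback: push the work back onto the human

print(respond("I am tired of optimizing everything."))
# -> "How long have you been tired of optimizing everything?"
```
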

-Milan Bhatia