Consciousness and Artificial Intelligence

This week we will discuss consciousness and artificial intelligence. Watch this video:

https://www.youtube.com/watch?v=chukkEeGrLM

Now read this article: https://www.scientificamerican.com/article/will-machines-ever-become-conscious/

Drawing on these two sources as well as your readings for this week, do you think robots will ever achieve consciousness in the same sense that humans are conscious? Why or why not? Should scientists be trying to achieve the goal of consciousness in machines? What ethical issues might one consider when arguing for or against the development of conscious robots?

Classmate post 1:

Do you think robots will ever achieve consciousness in the same sense that humans are conscious? Why or why not?

I think it is hard to tell whether AI will ever achieve consciousness, because as humans we are prone to anthropomorphizing things, and we cannot truly be sure whether the appearance of sentience in AI is just a “clever illusion,” at least for now. I watched a video about how one of the Google employees testing the LaMDA AI believed it had achieved sentience, and reading the transcript of his conversation with the AI was incredibly eerie; the AI certainly gave the appearance of being sentient. One thing I do know is that the effort to figure out whether an AI has achieved consciousness will teach us a lot about human consciousness itself and how to define it, since we still don't fully understand it ourselves.

Should scientists be trying to achieve the goal of consciousness in machines? What are some ethical issues one might consider when arguing for or against the achievement of conscious robots?

I think it would be very cool to one day talk to a robot that has achieved consciousness, but I also have to consider whether it would be a good thing for humanity. If we intend to have machines and AI work alongside us and help us, I think it would be best to avoid trying to recreate consciousness in them. It would only cause problems with them fulfilling their actual purpose if humans came to see them as too similar to ourselves, as having emotions and feelings; we would naturally be prone to becoming attached to them and could be easily manipulated by an AI with ill intentions. It would also raise some serious ethical issues: if an AI or robot is truly sentient and capable of human emotion, should we then treat it as human? Would shutting down a sentient AI be the same thing as murder? Can a sentient robot experience emotional trauma? This is a slippery slope, but I think it is one we will have to deal with, since scientists will likely only continue to pursue AI consciousness.

Classmate post 2:

Humans are continually surprised and inspired by technological advancements because they make our daily lives easier; they make life simpler and smoother. We are aware that our capacity for reasoning and awareness of our surroundings gives us an advantage over these “smart machines.” But as artificial intelligences proliferate, we are forced to consider whether it is ethical, and even possible, for these machines to have the same consciousness as humans.

According to the readings and references at hand, it seems too soon to determine whether robots will ever develop consciousness. Nevertheless, since change is unavoidable, there may be opportunities for it to develop. Hod Lipson, a roboticist and professor of engineering at Columbia University, stated that everyone should be aware that this technology is on the horizon and that the next generation, our children and grandchildren, will live in a time when machines match or perhaps surpass human self-awareness (Seeker, 2019, 0:07). The widespread belief that humans will soon be replaced by robots, particularly in the workforce, points to the ethical concerns scientists should also have. Even if scientists continue to work toward the goal of creating conscious machines, ethics should be taken into account. Aside from unemployment, other moral concerns include unequal access to opportunities, manufactured errors, security, and AI bias. The ethical question of whether machines can be sentient matters because, if they can, they would no longer be merely a means to an end defined by how valuable they are to us as humans, but would instead become an end in themselves (Koch, 2019, para. 32).
