BPRO 25800 (Spring 2021/Winter 2024) Are we doomed? Confronting the End of the World

OP-ED LINK: BPRO End of the World Chatbot Op-Ed

For my final project, I decided to write an op-ed detailing some concerns about the integration of AI through chatbots. Since chatbots serve as a simulation of social interaction, I wanted to dive deep into how these artificial conversations harm our perception of social settings. By observing the push and pull of ordinary human conversation and seeing how AI attempts to replicate it, I identified some key parts of conversation that are lost in the process. I wanted to focus not only on the macro structure of how information is processed (and why that is harmful), but also on empathy, an emotional capacity that an artificial system cannot replicate. Losing these core elements of back-and-forth conversation can leave people with false beliefs about what a real human conversation entails, and I found that quite shocking when considering how it could impact future generations.

My motivation for this project stemmed from a general curiosity about artificial intelligence. With how abundant ChatGPT and targeted advertising became last year, a stigma grew around AI as a means of cheating the system. Essays could be written in seconds, and problems could be solved in a heartbeat; however, the conversation about AI felt narrowly focused on informational content. In our class discussions, we spent a lot of time on the impacts of automation and how they could affect society. Something a human might take minutes to do, AI could solve in seconds and still have time to answer any follow-up questions. This was certainly interesting to me, but I found myself more captivated by what would happen if AI tried to do something only humans can do. Replicating the solution to a math problem is something any machine can manage, but what about something innate to the human spirit? Emotional values like honesty, optimism, humility, and love are at the core of how human connections are formed. The importance of human connection has been studied for generations as a source of prosperity and success, yet replicating this process through AI may not be as smooth as one would believe. It was at this thought that I became curious about how AI attempts to become more human-like. The idea of a machine trying to become more human in order to relate to humans fascinated me, and it pushed me to see what lay beyond basic assumptions.

While focusing on AI chatbots' relationship to human conversation would have been fascinating on its own, I couldn't ignore the machine running behind the scenes. Since a chatbot is meant to filter the right information and speak the right words at the opportune moment, there must be some body of information the AI is drawing from to produce such varied responses. A human providing context is sometimes enough, but what happens when the resource pool the AI draws from becomes littered with misinformation? Now I was twice as motivated to understand the system, if nothing else to hold a more knowledgeable opinion on it all. Learning about cybersecurity and data poisoning connected closely to our class discussions of misinformation and its spread through social media. The pattern of information being spread beyond its private sector to turn a profit or capitalize on an audience rang true for chatbots as well. Making economic gains off a tool meant to simulate interpersonal conversation was certainly shocking at first glance. However, the safety of these tools was something I had not considered before learning about chatbots, and weaknesses there can lead to more drastic problems such as privacy breaches, malware, and corrupted data sources for the chatbots to operate on.

When we consider how our generation has been the most acclimated to technological advancement, it is no surprise that the future can look both optimistic and disheartening. The tools we have made for ourselves have been generational. I'm sure I speak for many when I say that things like Google, email, and instant messaging are absolute necessities of day-to-day life. However, when we consider how some technological advancements can leave us feeling worse about our human lives, the picture starts to flip. Studies of social media and its negative effects on self-esteem have called into question how much technology is too much for us to handle. The complexity of our current generation means a solution might not be as easy to find as it was for those in the past; with more resources comes the chance for more problems. Studying AI chatbots and social connection served as a great way to get a tighter grasp on what technology could mean for our future.

I found it quite ironic that, in the pursuit of an easier life of accessible knowledge, more problems began to arise. A mechanized computer was designed to spew out more information for us to analyze in order to solve these issues, but we might be overestimating human capability in this field. Even so, AI is young and still developing every day, and being able to witness the seeds of its growth is quite exhilarating.

All in all, the project was an absolute joy to work on, and I am very thankful for the opportunity to research and dive deep into a topic of this nature. What we learned in class was extremely helpful in forming my own thoughts and opinions in this field, and what I learned about AI and about misinformation fit together naturally. While our future with AI and misinformation may be uncertain for years to come, I know for certain that this is neither the first nor the last technological challenge we have to face head-on. I hope you all enjoy what I have to say as much as I enjoyed writing it!

Image: "A Guide to Build an AI Chatbot" | Intuz

Above: photo of an AI chatbot, ready to serve the user with any immediate need.

Sources used are linked in the PDF.
