The flaws in AI - Part 1
I have a lot of thoughts about "AI", Large Language Models, GANs, and all the other tools spinning around the tech space at the moment. Fortunately, I'm in the process of getting a master's degree from the University of Pittsburgh and taking a class on current topics in computing. For this class I am planning to study the problems with AI, specifically in the creative space. I'm going to post my thoughts and progress submissions here as well. Below is the first submission for the class, which is a broad overview of my thoughts on the topic.
Large language models like GPT-3 have notable flaws, including bias, a lack of common sense, ethical concerns, and environmental impact. Addressing these issues is essential for responsible AI development and usage. In addition to the flaws in language models, AI image generation techniques, such as GANs (Generative Adversarial Networks), also raise significant copyright concerns. These models can generate visually convincing but entirely fabricated images, which can infringe upon copyright protections by creating derivative works without proper authorization. Additionally, the potential for these models to inherit and amplify biases present in training data, leading to biased or inappropriate image generation, can exacerbate copyright-related issues and ethical concerns in the context of image creation and ownership. In addition to the flaws in language models and AI image generation techniques, AI programming tools may also introduce challenges such as code biases and errors, potentially leading to unintended consequences. These challenges underscore the need for robust copyright regulations and ethical considerations in the development and deployment of AI technology.
The current suite of AI tools is inarguably changing the world of computing, but their usage in this early stage of development is fundamentally flawed. This point is surprisingly well summarized by ChatGPT-3.5 in the opening paragraph of this document. Through this course I would like to explore the downsides and limitations of current AI technology, and how those downsides create secondary effects when these tools are used.
I do not consider myself a Luddite; however, I think any new technology deserves to be greeted with excited skepticism before it is heartily embraced. Crypto, NFTs, and Web 3.0 are perfect examples of technologies that created an excitement bubble, and when that bubble popped, practical uses were still nowhere to be found. I do not think "AI" is a concept that will fade as quickly as NFTs; quite the opposite, it will stay with us in one form or another for the rest of computing. That is exactly why I think the problems with its current implementation and usage need to be evaluated and used to guide its growth, rather than blindly charging ahead in its development.