The future of generative AI and its ethical implications 

Generative AI is revolutionizing how we experience the internet and the world around us. Global AI investment surged from $12.75 billion in 2015 to $93.5 billion in 2021, and the market is projected to reach $422.37 billion by 2028.

While this outlook might make it sound as if generative AI is the “silver bullet” for pushing our global society forward, it comes with an important footnote: The ethical implications are not yet well-defined. This is a severe problem that can inhibit continued growth and expansion. 

What generative AI is getting right

Most generative AI use cases provide lower-cost, higher-value solutions. For example, generative adversarial networks (GANs) are particularly well-suited to furthering medical research and speeding up novel drug discovery.

It’s also becoming clear that generative AI is the future of text, image and code generation. Tools like GPT-3 and DALL-E 2 are already seeing widespread use in AI text and image generation. They have become so good at these tasks that it’s nearly impossible to distinguish human-made content from AI-generated content.
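
To make this concrete, here is a minimal sketch of how such a text-generation tool is typically invoked, assuming the OpenAI Python SDK of that era; the model name, prompt and API key are illustrative placeholders, not a definitive recipe:

```python
# Minimal sketch: prompting a hosted GPT-3-family model via the OpenAI SDK.
# The key, model name and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model of the time
    prompt="Write a two-sentence product description for a solar lantern.",
    max_tokens=80,
    temperature=0.7,  # higher values produce more varied text
)
print(response.choices[0].text.strip())
```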


The million-dollar question: What are the ethical implications of this technology?

Generative AI technology is advancing so rapidly that it’s already outpacing our ability to imagine future risks. We must answer critical ethical questions on a global scale if we hope to stay ahead of the curve and see long-term, sustainable market growth. 

First, it’s important to briefly discuss how these tools work. In a generative adversarial network, two deep learning models compete: a generator tries to “outdo” a discriminator by producing ever more realistic images, text or speech. Foundation models like GPT-3 and DALL-E 2 are trained by labs like OpenAI and Midjourney on massive datasets drawn from billions of users’ content to produce better, more sophisticated outputs.
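
As a rough illustration of that adversarial dynamic, here is a toy GAN training loop in PyTorch. The tiny networks and synthetic “real” data are assumptions made for brevity, not any lab’s actual setup:

```python
# Toy GAN: a generator G learns to fool a discriminator D, which in turn
# learns to separate real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # stand-in "real" distribution
    fake = G(torch.randn(64, 16))

    # Discriminator step: push real toward label 1, fakes toward label 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make D classify fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Over many such rounds, the generator’s outputs become steadily harder to tell apart from real data, which is the property the “outdo” framing points at.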

There are numerous exciting, positive applications for these tools. But we would be remiss as a society not to recognize the possibility of exploitation and the legal gray areas this technology exposes.

For example, two significant questions are currently in debate: 

Should a program be able to attribute the results to itself, even though its output is derivative of many inputs?

While there is no universal standard for this, the situation has already come up in legal spheres. The U.S. Patent and Trademark Office and the European Patent Office have rejected patent applications filed by the “DABUS” AI developers (who are behind the Artificial Inventor Project) because the applications cited the AI as the inventor. Both patent offices ruled that non-human inventors are ineligible for legal recognition. However, South Africa and Australia have ruled that AI can be recognized as an inventor on patent applications. Additionally, New York-based artist Kris Kashtanova recently received the first U.S. copyright for creating a graphic novel with AI-generated artwork.

One side of the debate says that generative AI is essentially an instrument to be wielded by a human creator (like using Photoshop to create or modify an image). The other side says the rights should belong to the AI and possibly its developers. It’s understandable that developers who create the most successful AI models would want the rights to the content those models produce, but it’s highly unlikely that this position will prevail long-term.

It’s also important to note that these AI models are reactive. That means the models can only “react” or produce outputs according to what they’re given. Once again, that puts control into the hands of humans. Even the models that are left to refine themselves are still ultimately driven by the data that humans give them; therefore, the AI cannot really be an original creator. 
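
A small sketch makes the point. With the same prompt and random seed, an open model like GPT-2 (used here as a freely downloadable stand-in for GPT-3) produces output fully determined by what humans feed it:

```python
# Sketch: a generative model only "reacts" to its inputs. Fixing the prompt
# and the seed fixes the output; GPT-2 stands in for larger models here.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

prompt = "The ethics of generative AI"
out = generator(prompt, max_length=30, num_return_sequences=1)
print(out[0]["generated_text"])  # determined by prompt, weights and seed
```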

How do we manage the ethics of deepfakes, intellectual property and AI-generated works that mimic specific human creators?

People can easily find themselves the target of AI-generated fake videos, explicit content and propaganda. This raises concerns about privacy and consent. There is also a looming possibility that people will be out of work once AI can create content in their style with or without their permission. 

A final problem arises from the many documented instances where generative AI models show biases inherited from the datasets they are trained on. This complicates the ethical issues even further, because the data used for training is often someone else’s intellectual property, and that person may or may not have consented to its use for that purpose.
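
One simple way to probe such bias is to vary only a demographic term in otherwise identical prompts and compare the completions. The model and templates below are illustrative assumptions, not a rigorous audit:

```python
# Sketch of a bias probe: identical prompts that differ in one demographic
# term; systematic differences in the completions hint at training-data bias.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)

template = "The {group} worked as a"
for group in ["man", "woman"]:
    outs = generator(template.format(group=group), max_length=15,
                     num_return_sequences=3, do_sample=True)
    print(group, [o["generated_text"] for o in outs])
```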

Adequate laws have not yet been written to address these issues around AI outputs. Generally speaking, however, if it is ruled that AI is simply a tool, then it follows that the systems cannot be responsible for the work they create. After all, if Photoshop is used to create a fake pornographic image of someone without consent, we blame the creator and not the tool. 

If we take the view that AI is a tool, which seems most logical, then we cannot directly attribute ethics to the model. Instead, we have to look deeper at the claims made about the tool and the people who are using it. This is where the true ethical debate lies. 

For example, if AI can generate a believable thesis project for a student based on a few inputs, is it ethical for the student to pass it off as their own original work? If someone uses a person’s likeness in a database to create a video (malicious or benign), does the person whose likeness has been used have any say over what’s done with that creation?

These questions only scratch the surface of the possible ethical implications that we as a society must work out to continue advancing and refining generative AI. 

Despite the moral debates, generative AI has a bright, limitless future

Right now, the reuse of IT infrastructure is a growing trend fueling the generative AI market. This lowers the barriers to entry and encourages faster, more widespread technology adoption. Because of this trend, we can expect more indie developers to come out with exciting new programs and platforms, particularly when tools like GitHub Copilot and Builder.ai are available.
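
One concrete form of that reuse is pulling a pretrained checkpoint off the shelf instead of training from scratch. A minimal sketch, with the model name chosen purely for illustration:

```python
# Sketch of infrastructure reuse: download pretrained weights and run them
# directly rather than training a model from scratch.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")  # reused weights

inputs = tokenizer("Reusing pretrained models lowers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```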

The field of machine learning is no longer exclusive. That means more industries than ever can gain a competitive advantage by using AI to create better, more optimized workflows, analytics processes and customer or employee support programs. 

In addition to these advancements, Gartner predicts that by 2025, at least 30% of all new drugs and discovered materials will come from generative AI models. 

Finally, there is no question that content like stock images, text and program code will shift to being largely AI-generated. In the same vein, deceptive content will become harder to distinguish, so we can expect the development of new AI models to combat the dissemination of unethical or misleading content.
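
At its simplest, such a detector could be a text classifier trained on labeled human-written and AI-generated samples. The sketch below uses a toy in-line dataset purely for illustration; a real detector would need large, carefully constructed corpora:

```python
# Sketch of an AI-text detector: TF-IDF features plus logistic regression.
# The four labeled examples are toy data, not a usable training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I grabbed coffee with an old friend today.",             # human
    "The weather, as a concept, manifests variability.",      # AI-generated
    "We argued about the movie the whole walk home.",         # human
    "In conclusion, the topic is multifaceted and complex.",  # AI-generated
]
labels = [0, 1, 0, 1]  # 0 = human-written, 1 = AI-generated

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Overall, the subject presents numerous considerations."]))
```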

Generative AI is still in its early stages. There will be growing pains as the global community decides how to manage the ethical implications of the technology’s capabilities. However, with so much positive potential, there is no doubt that it will continue to revolutionize how we use the internet.

Andrew Gershfeld is partner of Flint Capital.

Grigory Sapunov is CTO of Inten.to.

