Generative AI: How can businesses use it in a responsible way?
In recent months we have witnessed the rise of generative AI in different fields and through different applications. Undoubtedly, this technology has capabilities and functionalities that make it very attractive. Its outstanding ability to generate images or text can lead to its use in different business areas.
Given the rise of these systems, a debate has also arisen regarding the commercial use of these tools and the products they deliver. While some argue that generative AI can and should be used intensively in different industries, there are those who consider that it is highly irresponsible to make use of this technology at this point. Neither position seems to be reasonable. Generative AI can be used in some cases, but there are risks that must be carefully assessed before doing so. Therefore, we recommend starting by looking at these three points before deciding how to make use of AI in your business:
- Understand how the AI works. Generative AI is unlike the AI systems we have seen before, including earlier machine learning systems: its features are different, and its training methodology and data processing mechanisms are highly innovative. If we understand the features embedded in a generative AI system, we can understand how it achieves its results and what its true capabilities and limitations are. This also reminds us that we are still dealing with a machine that has limitations and is in constant development; many of the AI systems now available are still in a beta phase and are still being tested.
- Read the small print. When accessing these systems, it is important to read the terms and conditions of the service. These terms contain valuable information about the liability arrangements set by the developers of the technology, the obligations assumed by the user, and any limitations on the use of images and texts generated with these systems. This is particularly important when defining issues such as intellectual property protection and the infringement risks that may arise from using the texts and images these systems generate. It will also give you elements to decide in which cases it is necessary to disclose that a text or image was developed using this technology.
- Autonomy by default does not exist. The level of autonomy and decision-making power we give this technology is first and foremost a human decision: generative AI will have only the freedom and autonomy we choose to grant it. For now, these systems require supervision and function as tools that supplement the work of various professionals, and no decision should rely exclusively on the input they provide. This will undoubtedly have a disruptive effect on working relationships and on how obligations of supervision, care and responsibility in the workplace are determined. Setting a clear workplace policy on how this technology should be adopted will prove relevant at this stage.
AI will undoubtedly have effects we have not yet fully comprehended, but there are already areas in which the law will play a fundamental role in ensuring that this technology is used appropriately, ethically, and responsibly. It is therefore important to begin to understand the legal terms surrounding the use of AI and to complement this with a holistic strategy that allows companies and businesses to use it properly.
About the author:
Armando Guio Espanol, Associate
Armando specializes in corporate and transactional matters, with a focus on data protection, technology matters and AI-assisted work. He holds an LLM from Harvard Law School and a Master's degree from Oxford University. He has worked as a research fellow at the Berkman Klein Center at Harvard University, collaborating with the Ethics and Governance of AI initiative, and has advised the government of Colombia on the design of its National AI Strategy and on compliance with OECD standards on this matter.
What will the law firm of the future look like?
Time travellers from 100 years ago would find the modern law firm a curious beast. On the one hand, the work itself looks familiar: litigators spend their days preparing pleadings and giving oral argument in court, while transactional lawyers spend theirs preparing and reviewing contracts, deeds and agreements. Lawyers in 2023 do pretty much the same thing. On the other hand, the way that work is carried out is radically different. Law firms are increasingly diverse, no longer the sole province of the most privileged in society. And instead of doing everything by hand, or dictating to clerical staff, lawyers are expected to be adept with a range of different tech tools.
Sterlington Welcomes Todd McClelland as Partner in Data Privacy, AI and Litigation practices
Sterlington, an international law firm, is pleased to announce the appointment of Todd McClelland as a Partner in Sterlington’s Cybersecurity, Data Privacy, AI and Litigation practices. As a recognized leader in cybersecurity and data privacy, Todd’s expertise will strengthen Sterlington’s capabilities in these strategic areas of focus for the firm.
What Is an NDA? Everything About Outsourcing NDAs
Interested in learning more about non-disclosure agreements? Explore our comprehensive guide to NDAs and why outsourcing your NDAs is the ideal option.