Can AI Own Itself?
Artificial intelligence (AI) has evolved from a software concept into an active presence in our lives. We use it to manage our power grids, analyze medical data, and keep planes in the air, to name just a few examples. In general, AI solves problems by performing automated analyses of data based on programmer-supplied algorithms. Often it also incorporates machine learning, where the program trains itself to become “smarter.” No longer is AI simply an attempt to replicate human intelligence: it can have a mind of its own, and that mind can work quite differently from ours.
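To make that distinction concrete, here is a minimal, purely illustrative Python sketch (not from the original article) contrasting a programmer-supplied rule with a tiny program that "trains itself" on data. The scenario (flagging unusually high power readings), the numbers, and the learned-threshold approach are all hypothetical assumptions chosen only to show the difference.

```python
# Illustrative only: a hand-written rule versus a rule "learned" from data.
# The scenario and numbers are hypothetical, not taken from the article.

# 1) Programmer-supplied algorithm: the human encodes the decision rule directly.
def flag_reading_by_rule(reading_kw: float) -> bool:
    """A human chose the 5.0 kW cutoff; the logic is fully transparent."""
    return reading_kw > 5.0

# 2) Machine learning in miniature: the program derives its own cutoff from
#    labeled examples, so the final rule comes from the data, not the coder.
training_data = [(1.2, False), (2.8, False), (4.9, False),
                 (6.1, True), (7.4, True), (9.0, True)]  # (reading_kw, is_anomaly)

def learn_threshold(examples):
    """Pick the cutoff that misclassifies the fewest training examples."""
    candidates = sorted(reading for reading, _ in examples)
    best_cutoff, best_errors = None, len(examples) + 1
    for cutoff in candidates:
        errors = sum((reading > cutoff) != label for reading, label in examples)
        if errors < best_errors:
            best_cutoff, best_errors = cutoff, errors
    return best_cutoff

learned_cutoff = learn_threshold(training_data)

print(flag_reading_by_rule(5.5))   # the human's rule
print(5.5 > learned_cutoff)        # the machine's rule, derived from data
```

In a real machine-learning system the learned rule is vastly more complex, which is exactly why questions of authorship and accountability become harder as the program moves further from its programmer's explicit instructions.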
In recent years, patent offices around the world have seen a surge of AI-related patent filings. These AI patent applications have challenged our existing standards for patent eligibility, and they raise an important question about intellectual property (IP): When machine learning takes AI beyond its human programmers’ contributions, can AI own itself? The answer to this question is tricky because it will have to encompass not only who has a right to profit from AI, but also who’s responsible for insufficient or even damaging outcomes.
A word from WIPO
Last year, World Intellectual Property Organization (WIPO) Director General Francis Gurry said, “The fundamental goals of the IP system have always been to encourage new technologies and creative works, and to create a sustainable economic basis for invention and creation. From a purely economic perspective, if we set aside other aims of the IP system, such as ‘just reward’ and moral rights, there is no reason why we shouldn’t use IP to reward AI-generated inventions or creations.” However, Gurry admitted that “this still requires some thought” and that “the answers are not clear.”
The current thinking on AI ownership
U.S. courts have been clear that machines are not individuals and thus cannot own property or be held liable. Indeed, the general worldwide consensus at the moment is that AI belongs to its human programmer or programmers, and numerous test cases back up this consensus. Artworks “created” by AI, for example, have so far relied predominantly on continual tweaking of algorithms by humans to achieve the final result. In such cases, AI does indeed seem to be merely a tool employed by a human.
Still, questions remain. For example, much of what AI does is analyze massive data sets. This raises the question of whether the owners of such data are entitled to IP rights for inventions that used their data. The Internet of Things poses another question along the same lines: Who owns a program whose functionality depends on interaction with other proprietary devices or programs? There is no simple answer, and it gets even more complicated.
AI often incorporates chunks of publicly donated open source code. Should mechanisms exist that allow the contributors of such code to share in the earnings? A case in point is an AI-created painting that recently sold for half a million dollars. It was based in part on open source code written and uploaded by programmer Robbie Barrat, who asked in a tweet, “Am I crazy for thinking that they really just used my network and are selling results?” Making things even more complex is the fact that open source code may contain contributions from many programmers, so even definitively identifying its authors can be difficult, much less compensating them.
AI’s “black box” problem
So far, we’ve been discussing AI in general, but AI that involves machine learning is a whole other story, and one in which AI self-ownership may be more sensibly justified. This type of program can evolve well beyond its human input, becoming a “black box” whose workings are often largely unknown even to its original human “creators.” This raises its own particular set of IP questions: Does a human even want to own the IP of AI for which the decision-making process is unknown? Who would be responsible for an AI program that malfunctions or causes damage? There’s an additional IP wrinkle here, too: Who is responsible for AI that teaches itself to infringe on someone else’s patent?
While machine learning has proven to be a productive tool, there’s no question that it can be unnerving and potentially dangerous. Without knowing how a program actually derives its conclusions, even when those conclusions seem more or less sensible, programmers are rightfully uneasy. Hanna Wallach, a senior researcher at Microsoft, told Quartz, “As machine learning becomes more prevalent in society — and the stakes keep getting higher and higher — people are beginning to realize that we can’t treat these systems as infallible and impartial black boxes. We need to understand what’s going on inside them and how they are being used.”
A radical rethink of IP and AI may eventually be necessary
The black box nature of machine learning AI, the complications of cross-application functionality, the difficulty of assessing genuine ownership of an AI program: all of these lead us to believe that current IP frameworks fall short when it comes to AI ownership. At the same time, WIPO Director General Gurry believes, “The IP system as we know it is certainly not going out of fashion. It is being used more than ever before. But new challenges are emerging and the result may be an additional layer of IP, rather than the replacement of the existing system.” Only time will tell.