AI: Decoding IP – WIPO Conference 2019 (Part Two)


On 18th June 2019 the UKIPO and WIPO hosted ‘AI: Decoding IP’: a conference dedicated to the AI zeitgeist in the intellectual property industry.

Part One features the introductory comments from Lord Kitchin and Francis Gurry of WIPO, followed by the panel on new business models, and how AI is disrupting IP.

This post, Part Two, features the afternoon sessions on ownership, entitlement and liability (featuring Prof. Lionel Bently, Dr Eleonora Rosati, Prof. Tanya Aplin and others) and the session on ethics and public perception (Dr Christopher Markou and Dr John Machtynger of Microsoft).

Ownership, Entitlement and Liability

The afternoon promises to be a “contentious” session. Dr Noam Shemtov (Queen Mary University) is first to speak. His talk is on inventorship.

Inventors have two sets of rights under patent law: substantive rights to ownership, and moral rights to attribution. The EPO has commissioned a study addressing three key questions:

  1. Can AI be designated as an inventor?
  2. Should AI be designated as an inventor?
  3. Who is the inventor of an invention involving AI activity?

Inventorship, to a certain extent, is just the first step in an entitlement enquiry. We already skip this step when considering employed inventors. Currently the answer to question 1 is No – an AI cannot be designated as an inventor.

Regarding question 2, what is the purpose of attribution? Informing the public of the inventor’s involvement – this enables the public to give her/him kudos, and allows reputational recognition, which studies show still acts as a powerful incentive [Ed: anyone who has been involved in an entitlement dispute will know that attribution can weigh just as heavily as financial interest].

These considerations don’t apply to AI [Ed: do they not? naming an AI system on patent applications surely increases its reputation and potentially its value – this is relevant when a company licences its software to third parties, perhaps it would benefit from a faculty for its system to be named on the application’s bibliographic data?]

Dr Shemtov considers that there is always a human intellect with “intellectual domination”: either the owner of the system or the user/designer of the system. It is the second of these that is the real inventor.

Prof. Eleonora Rosati wants to consider copyright issues arising before creation starts. This is the issue of text and data mining, which falls under the broad heading of liability.

These techniques allow the extraction of useful information from large and otherwise unhelpful datasets. Uses include the scanning of social media for reactions to films or adverts, and algorithmic assessments of clothing trends.

The next step is creating data – an example is the recent Google Doodle which allowed users to make new music in the style of Bach using a tool that had been fed his music. AI has also been used to write (bad) poems that were nonetheless good enough to fool a human panel.

With regard to access to content, further IP issues arise: copyright; contractual restrictions; database rights. We now have mandatory exceptions at the EU level. It is now apparent that text and data mining requires a licence unless you fall within an exception.

Prof. Tanya Aplin focussed on AI-created copyright and ownership. There are many stories about AI-generated artworks, music etc. We have an anthropocentric view of copyright law. The key requirement is originality. In the EU we have the “author’s own intellectual creation” standard, which seems restricted to natural persons.

Throughout copyright law there is a focus on human creators. This can be explained by historical context, and by insight from the natural rights justification. Creative mental labour has been positioned as uniquely human. Whether machines can emulate these capabilities is not a question that can be resolved entirely within intellectual property. The concerns cut across intellectual property law into other areas such as tort and even crime.

We should be wary of the idea of copyright incentives for AI works. We already have incentives for the software producing these works, as well as measures such as trade secrets. Prof. Aplin says it is preferable not to provide further protection for AI-generated works.

Prof. Lionel Bently wants to discuss ownership. He says countries are obliged to protect sound recordings and broadcasts, which have no threshold requirement of human authorship. There is no basis on which countries can avoid offering protection for works in those forms. For works of authorship there are different considerations.

Prof. Bently considers the computer-generated works provisions of the 1988 Act. These were copied into a large variety of countries. This is a broad concept.

Copyright in a computer-generated work expires 50 years after its making; moral rights are excluded; and authorship is attributed to the person who undertook the arrangements necessary for the work’s creation. Have we somehow anticipated the current issues? Some people think so; however, Prof. Bently suggests that the effect of EU law is that the computer-generated works provisions of the CDPA 1988 are not compliant.

The AG in C-145/10 Eva-Maria Painer v Standard VerlagsGmbH considered that copyright does not subsist in computer-generated works. Prof. Bently also finds it implausible that AI can add the necessary creative touch.

Prof. Bently mentions the B-word. He says the UK could deviate from the EU position after Brexit. There will also be room for clarification of the provisions of EU law by the UK’s Supreme Court.

Dr Belinda Isaac wants to address issues about how AI should be regulated. How do we assign moral and legal responsibility? Some believe government should be hands off to encourage innovation; the market will provide.

There are issues, however, that suggest some changes might be necessary. One example comes from the recent autonomous vehicle crash in the US. What about the intervention of the Boeing anti-stall software that caused recent accidents?

Everything from medical scanning to home vacuums will be using AI systems. Do we want generic AI legislation, or sector specific intervention?

We need legal certainty. Dr Isaac says that absent regulation companies cannot be trusted to act in the public’s best interests: just see the social media companies.

Final panel: Ethics and AI

Dr John Machtynger of Microsoft puts forward six pervasive ethical principles that are in action at Microsoft. He says that AI design is focussed on trust.

Those principles are fairness, reliability, privacy, inclusiveness, transparency and accountability.

Microsoft has a principle of design thinking for inclusivity. There is a benefit in starting with inclusive assumptions.

Transparency is important to allow work to stand the test of time. People need to be able to look back and see how things work. Data scientists are, however, juggling the need for accuracy against the need for transparency.

Microsoft doesn’t want everyone to adopt its system, but it thinks others should adopt a system of some kind so that they behave consistently.

China released the Beijing AI Principles last week. They weren’t saying anything very different from what our Western-focussed values have provided.

As the technology develops these principles will be more stretched. New questions will arise about the balance between innovation and regulation. 

Dr Machtynger warns that the current IP system will constrain innovation. He says that currently respect for IP is not a principle that Microsoft has advanced. 

Dr Chris Markou was there to finish the day. He says that all inventions challenge existing legal paradigms, from the camera to the railroad. We always feel we face something unprecedented and feel the need to start again from scratch. Our systems are fairly robust.

Ethics matters in a number of ways. This conversation on AI has taken on a life of its own. The AI industry has proposed AI as a problem to be solved by application of ethics.

Application of creativity is a tremendously hard problem. The lingering concern is not a lack of respect for AI-created rights, but the end of creative activity by humans, who are replaced much as computers have conquered Go and chess. This might happen because AI could generate near-infinite sequences of songs or music that no human artist could later avoid accidentally reproducing and thereby infringing.

We therefore need to be careful that we don’t cut off avenues of human expression that are valuable in and of themselves, in a rush to incentivise the creation of content for enjoyment.
