Poor-quality data used to build Artificial Intelligence (AI) tools, system errors leading to flawed results, and breaches of sensitive personal data are the primary threats associated with the use of AI by pharmaceutical and biotechnology companies developing new drug therapies, according to lawyers from Baker McKenzie. For European companies using artificial intelligence, the most important regulation will be the AI Act.
The AI Act's provisions enter into force in phases, and the Polish regulator is currently preparing to implement the Act into local law. Both the AI Act and the recommendations of regulatory institutions are built around a risk assessment scale and call for mechanisms adequate to protect patients' rights. The riskiest solutions carry the heaviest obligations concerning transparency, internal control, verification of results, and accountability for mistakes.
“We advise our clients who are currently implementing AI-based technologies to adopt a similar approach. Any planned use of AI-based technology should be preceded by an evaluation of the potential risks from both the technological and the legal perspective,” says Martyna Czapska, a senior associate in the IP Tech team at Baker McKenzie. “For example, the use of popular chatbots by an organization holding sensitive data should be undertaken deliberately and in line with confidentiality rules.”
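As a rough illustration of this risk-based logic, the sketch below maps risk tiers to obligation lists. The tier names and obligations are simplified assumptions made for illustration, not the AI Act's exact categories or wording.

```python
# Illustrative sketch only: the risk tiers and obligation lists below are
# simplified assumptions, not the AI Act's actual classification or text.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Hypothetical mapping: the higher the risk tier, the heavier the obligations.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.HIGH: [
        "transparency notice to users",
        "internal control and documentation",
        "verification of results",
        "accountability for mistakes",
    ],
    RiskTier.UNACCEPTABLE: ["deployment prohibited"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative compliance obligations for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print("-", item)
```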
The World Health Organization (WHO) and the European Medicines Agency (EMA) have issued recommendations and guidance documents on the use of AI in the health sector and in research into new drug therapies. Both point out the risks associated with these technologies and recommend caution in their use. Baker McKenzie's experts note that the IT industry, driven by competition and aiming for the fastest possible commercialization, does not fit easily into the world of medicine.
“The EMA very clearly emphasizes in its statement that the development and implementation of artificial intelligence in research into new drugs should be guided by a human-centric approach,” says Juliusz Krzyżanowski, who leads the healthcare team in Baker McKenzie’s Warsaw office. “While Silicon Valley can operate under the ‘move fast and break things’ philosophy, biotech and pharma companies cannot afford to take such an approach.”
A crucial issue for every researcher and entrepreneur using AI-based tools is that responsibility for infringements of third-party rights, breaches of confidentiality, or the consequences of AI model errors may, in many cases, rest with the user. Any due diligence inquiry preceding the use of an AI tool should therefore include an evaluation of the sources and quality of the data used to train the model. Data that is incomplete, unreliable, biased, or infringing third-party rights can lead to unreliable results and generate legal risks.
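What such a data-quality check could look like in practice is sketched below: a minimal example that flags missing values and group imbalance in tabular training records. The field names ("age", "sex", "outcome") and the sample records are hypothetical, and real due diligence would of course go much further.

```python
# Minimal sketch of two basic training-data checks: completeness and group
# balance. Field names and sample records are hypothetical assumptions.
from collections import Counter


def completeness(records: list[dict], fields: list[str]) -> dict[str, float]:
    """Fraction of records with a non-missing value for each field."""
    n = len(records)
    return {
        f: sum(1 for r in records if r.get(f) is not None) / n
        for f in fields
    }


def group_balance(records: list[dict], field: str) -> dict:
    """Relative frequency of each value of a field, a crude bias indicator."""
    counts = Counter(r[field] for r in records if r.get(field) is not None)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}


if __name__ == "__main__":
    sample = [
        {"age": 61, "sex": "F", "outcome": "responder"},
        {"age": 54, "sex": "M", "outcome": None},
        {"age": None, "sex": "M", "outcome": "non-responder"},
        {"age": 47, "sex": "M", "outcome": "responder"},
    ]
    print(completeness(sample, ["age", "sex", "outcome"]))  # flags gaps
    print(group_balance(sample, "sex"))  # flags imbalance, e.g. 75% "M"
```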
The entrepreneur should secure appropriate clauses in the contract: guarantees and assurances from the supplier regarding the rights to use the data for training the tool, and responsibility for its operation. If an AI-based system is fed sensitive patient information, there is a risk of unauthorized use or disclosure of confidential information and, in some cases, personal data. It is necessary to determine whether the tool provider will have the right to process this data and feed it into their algorithm, and whether transferring it outside the company is necessary and compliant with the law.
“Issues resulting from the GDPR, such as the legal basis for data processing, fulfillment of information obligations, minimizing data collection and processing to the level necessary to achieve the purpose of processing, and the right to object, also apply to AI-based tools,” adds Martyna Czapska.
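One way such data minimization could look before a record is sent to an external AI tool is sketched below. The field names, the allow-list, and the salting scheme are hypothetical assumptions; this is an illustration of the principle, not a compliance-certified implementation.

```python
# Minimal sketch of data minimization and pseudonymization before sending a
# patient record to an external AI tool. Field names, the allow-list, and
# the salt handling are hypothetical assumptions, not legal advice.
import hashlib

# Only the fields the stated purpose actually requires (assumed here).
ALLOWED_FIELDS = {"age_band", "diagnosis_code"}


def pseudonymize_id(patient_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]


def minimize(record: dict, salt: str) -> dict:
    """Keep only allow-listed fields and pseudonymize the identifier."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    out["subject_ref"] = pseudonymize_id(record["patient_id"], salt)
    return out


if __name__ == "__main__":
    raw = {
        "patient_id": "PL-000123",
        "name": "Jan Kowalski",  # dropped: not needed for the purpose
        "age_band": "50-59",
        "diagnosis_code": "C34.1",
    }
    print(minimize(raw, salt="per-project-secret"))
```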
A further issue, under copyright law, concerns the work product of the model. Under the currently predominant approach, a work generated by artificial intelligence is not protected by copyright; such rights can belong only to a human being. The output of an AI tool should therefore be treated not as a finished creation but as inspiration for further work. The general recommendation for using AI tools to create new technologies and medical therapies is to involve researchers and maintain control at every stage of the model's life cycle: from training, through ensuring data quality, to verification of results.
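A minimal sketch of such a human-in-the-loop gate is shown below: model output stays a draft until a researcher signs off. The `Review` type, field names, and workflow are illustrative assumptions, not a validated quality system.

```python
# Minimal sketch of a human-in-the-loop gate: AI output is treated as a
# draft requiring researcher sign-off. The Review type and workflow are
# illustrative assumptions, not a validated quality-management system.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Review:
    model_output: str
    reviewer: Optional[str] = None
    approved: bool = False
    notes: list[str] = field(default_factory=list)


def sign_off(review: Review, reviewer: str, approved: bool, note: str) -> Review:
    """Record a researcher's verdict; the output stays a draft until approved."""
    review.reviewer = reviewer
    review.approved = approved
    review.notes.append(note)
    return review


if __name__ == "__main__":
    draft = Review(model_output="Candidate compound X shows predicted affinity...")
    done = sign_off(
        draft,
        reviewer="dr.kowalska",
        approved=False,
        note="Prediction conflicts with assay data; rerun with the v2 dataset.",
    )
    print(done.approved, done.notes)
```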
Source: https://ceo.com.pl/sztuczna-inteligencja-w-medycynie-baker-mckenzie-ostrzega-przed-ryzykiem-dla-danych-i-odpowiedzialnoscia-firm-farmaceutycznych-56953