Artificial Intelligence law in Bangladesh:
Recent developments have fueled speculation that some artificial intelligence (AI) systems have attained ‘sentience’. Sentient AI systems, to paraphrase philosopher Nick Bostrom, are those that can experience ‘qualia’: feelings, sensations, and thoughts. This claim is contested, but the news has left a trail of excitement in its wake.
Artificial intelligence (AI) generated content has posed significant challenges to the current Intellectual Property (IP) regime. It is still unclear how far the current IP system, which is predominantly based on consequentialist and/or utilitarian approaches, is prepared to accommodate AI-generated content. Furthermore, in developing countries like Bangladesh, with a still-maturing judicial system and no dedicated Artificial Intelligence law, there are numerous ethical and legal issues concerning AI, such as the lack of a regulatory regime, data misuse, bias, and discrimination.
Is AI-generated content patentable, copyrightable, or designable?
Perhaps the most striking issue is that, because AI creates or invents content from provided data, granting IP protection could effectively extend proprietary rights over that underlying data, potentially violating data protection laws. Equally, one cannot deny the implications of AI systems in their current state. As a result, there have been heated legal and policy debates on a number of unresolved issues: To what extent might AI-generated content be patentable, copyrightable, or designable, and if so, who would be the inventor, author, or designer, respectively?
What are the implications of AI-generated content if AI is granted legal personality, with rights and duties?
What are the available legal and policy options for dealing with this new technology?
According to the Hegelian approach to philosophy, the inventor or creator has a legitimate justification to enjoy the results and benefits of such property due to the connection between the work and the person.
AI systems, or so-called sentient computers, on the other hand, cannot be considered the beneficiaries of their own labour. Similarly, the ex-ante justification of inventiveness is inapplicable to AI-generated content. The romantic notion of a “lone genius inventor or creator who invents or creates only if strongly incentivized” does not appear well suited to justifying the protection of AI-generated content.
Computers do not create or invent content on their own initiative; rather, they are directed to do so. Furthermore, because computers are value-neutral, AI may produce socially or culturally unacceptable or immoral content, which may contradict the proposition of ‘social planning theory’. There may also be concerns based on the ‘free-riding’ doctrine: if AI-generated content is not protected, it will be open to copying and others may take undue benefits, contradicting the deontological justification of the IP regime, just as in any other branch of law.
Much has been written about the desirability of such sentience, with debate centered on topics like how sentient AI adds value to society and its role in shaping our understanding of consciousness. Commentators have also attempted to theorize the tenets of responsible sentience by articulating the dangers of such systems.
Legal Risks associated with Sentient systems and Artificial Intelligence law:
Individual privacy is one such risk. Sentient systems, in theory, could act as a patient listener capable of roving conversations with customers. This characterization of such systems animates their interaction with privacy law and necessitates reflection.
While such systems have a wide range of applications, one that has recently received attention is AI-enabled chatbots. This use-case hints at how sentience might be used in the future to make human-machine interactions more personable and meaningful.
These interactions, of course, include personal information. As a result, they are subject to privacy law. However, coding such systems with sentience makes the future operation of that law uncertain and susceptible to disruption. Sentient systems, as opposed to the average bot, can engage deeply with their interlocutors without the need for human intervention.
Prompts to share deeply held beliefs, health information, or financial data are examples of such engagement. Prompts may also encourage people to talk about related people, such as friends or family. This likely aftereffect of sentience may thwart the strict application of privacy law.
Consent dilemma with AI law in Bangladesh:
The dilemma of consent is at the heart of such frustration. Sentient AI systems are likely to alter conversational patterns, undermining privacy law’s notice-and-consent provisions. In India, for example, the Information Technology Act of 2000 requires entities collecting sensitive personal information — such as financial information, medical history, and sexual orientation — to obtain prior consent before collecting such information.
Entities must also explain to customers why such information is being collected. This purpose effectively limits an entity’s data processing activity. The above-mentioned rules require that data collection be limited to the stated purpose or other lawful purposes related to the entity’s functions.
However, communicating a strong, well-defined purpose to users will be difficult for entities deploying sentient AI. The AI’s novel or meandering conversation patterns may introduce new themes for conversations, rendering consent moot. As a result, convoluted consent tokens and ambiguous purpose statements may dominate the machine-human relationship, causing anxiety in both customers and businesses.
Sentient AI, which is infinitely curious, can create situations in which businesses and regulators must respond with unwavering vigilance. As a result, their consent-and-purpose-bending experiment with privacy law necessitates well-thought-out solutions.
The fundamental right to privacy is not expressly granted in the Constitution. The courts, on the other hand, have incorporated the right to privacy into the following existing fundamental rights:
Article 39 guarantees freedom of thought and conscience; Article 32 guarantees life and personal liberty.
These fundamental rights under the Constitution, however, are subject to reasonable restrictions imposed by the State under Article 39(2) of the Constitution.
According to Article 43 of the Constitution, every citizen has the right to the privacy of their correspondence and other means of communication, subject to any reasonable restrictions imposed by law in the interests of the security of the State, public order, public morality, or public health.
Furthermore, the Constitution states that no one shall be deprived of life or personal liberty except in accordance with legal procedures. As a result, judicial intervention is very much possible in Bangladesh’s legal system, and such privacy is subject to the application of lawful interception.
The Technology Act and the Digital Security Act address issues such as wrongful disclosure, personal data misuse, and breach of contractual terms relating to personal data.
The Technology Act
The Technology Act provides legal recognition for electronic certificates and for transactions conducted through electronic data interchange and other forms of electronic communication, as alternatives to paper-based methods of communication and information storage, and it facilitates the electronic filing of documents with government agencies.
The Technology Act imposes liability on any person or entity that possesses, deals with, or handles any sensitive personal data or information. Furthermore, the Technology Act requires the implementation and maintenance of reasonable security practices to avoid wrongful loss or gain by the owner of such data, as detailed below.
The Government of Bangladesh (‘the Government’) has the authority to intercept data under certain conditions under the Technology Act. Section 46 of the Technology Act, in particular, which is an exception to the general rule for maintaining information privacy and secrecy, provides that the government may intercept data if it is satisfied that such interception is necessary in the interest of:
- the state’s sovereignty, integrity, or security;
- friendly relations with foreign states;
- public order; preventing incitement to commit any cognizable offence relating to the above; or investigating any offence.
The Government may, by order, direct any agency of the appropriate government authority to intercept, monitor, or decrypt, or cause any information generated, transmitted, received, or stored in any computer resource to be intercepted, monitored, or decrypted.
Section 46 of the Technology Act gives the government the authority to intercept, monitor, or decrypt any information, including personal information, in any computer resource. The government may require disclosure of information if it is of such a nature that it should be disclosed in the public interest. This category may include information about anti-national activities that violate national security, violations of the law or statutory duty, or fraud.
Under the aforementioned conditions, the government-appointed controller can direct a subscriber to extend facilities to decrypt, intercept, and monitor information. Section 69 of the Technology Act covers interception, monitoring, and decryption for the purpose of investigating cybercrime. The controller may declare any computer, computer system, or computer network to be a protected system and authorize applicable persons to secure access to protected systems by publishing a notice in the Bangladesh Government Press or in the electronic gazette.
The Digital Security Act and Artificial Intelligence law
The Digital Security Act was passed to ensure national data security and to create laws governing data crime detection, prevention, suppression, prosecution, and other related issues. The relevant provisions of the Digital Security Act are listed below.
According to the Digital Security Act, if any data or information published or propagated in digital media about a subject under the Director General’s purview threatens data security, the Director General may request that the relevant regulatory authority remove or block said data or information as appropriate.
The Telecommunications Act of 2001
The Telecommunications Act of 2001 (‘the Telecom Act’) is the only law that governs two-party electronic communication. According to Section 67(b) of the Telecom Act, no one shall intercept any radio communication or telecommunication, nor shall any intercepted communication be used or divulged, unless the originator of the communication or the person to whom the originator intends to send it has consented to or approved the interception or divulgence. Such an act is punishable by imprisonment for a maximum of three years or a fine of BDT 300,000 (approx. €3,153), or both.
Under Section 97(Ka) of the Telecom Act, the Government may empower certain authorities (e.g., intelligence agencies, national security agencies, investigation agencies, or any officer of any law enforcement agency) to suspend or prohibit the transmission of any data or voice call, as well as record or collect user information relating to any subscriber to a telecommunications service, on the grounds of national security and public order.
This broadly drafted provision includes intercept capabilities. The relevant telecoms operator must fully support the empowered authority in exercising such powers. The Telecom Act makes no mention of time limits on these powers. As a result, an interception may last as long as the agency carrying out the interception desires.
The Government may require a telecommunications operator to keep records relating to a specific user’s communications under the broad powers granted in Section 97(Ka) of the Telecom Act on the grounds of national security and public order. However, when deciding whether to grant a retention request, the relevant government agency must consider the operator’s technical resources and ability to retain information.
Under Section 96 of the Telecom Act, the government may seize any telecommunication system and all arrangements necessary to operate it in the public interest. It may retain such possession for any period of time and keep the operator and their employees employed full-time or for a specific period of time for the purpose of operating such apparatus or system. However, the government is required to compensate the owner or person in control of the radio apparatus or telecommunications system that it takes over.
Except for authorised persons as defined in Section 97(Ka) of the Telecom Act (security agencies), anyone who taps or intercepts telecommunication between two persons without their permission commits an offense.
According to Section 68 of the Telecom Act, the following acts are offenses if committed by an official of a licensee while performing their duties: using any telecommunication or radio apparatus with the intent of obtaining information about the sender, the addressee, or the content of a message sent by telecommunication or radio communication, unless the Bangladesh Telecommunication Regulatory Commission (‘BTRC’) has authorized that employee or operator to receive the message; and disclosing any such information, except as required by the BTRC or a court.
The 1872 Contract Act
The Contract Act of 1872 can be used to address the issue of data protection, which has traditionally been governed by the contractual relationship between parties. Parties are free to enter into contracts to define their relationship in terms of personal data, personal sensitive data, data that may not be transferred out of or into Bangladesh, and the manner in which the same is handled.
The 2009 Consumer Rights Protection Act
According to Section 52 of the Consumers’ Rights Protection Act, 2009, anyone who violates any prohibition under any law currently in force by doing any act that is detrimental to the life or security of a service receiver is punishable by imprisonment for a period not exceeding three years and/or a fine not exceeding BDT 200,000 (approx. €2,070). According to Section 53, any service provider who, through negligence, irresponsibility, or carelessness, harms or kills the service receiver’s finances or health, or causes death, is subject to imprisonment for a period not exceeding three years and/or a fine not exceeding BDT 200,000 (approx. €1,980). Furthermore, the consumer may be entitled to compensation.
These provisions implicitly impose responsibility on the person or entity that possesses, deals with, or handles any sensitive personal data or information for the consumer to implement and maintain reasonable security practices in order to avoid wrongful loss or gain to the owner of such data.
The Penal Code
The Penal Code of 1860 (‘the Penal Code’) can be used to effectively prevent data theft. The Penal Code punishes misappropriation of property, theft, and criminal breach of trust with imprisonment and a fine. Although the Penal Code only applies to movable property, it has been defined to include corporeal property of “every description,” except land and things permanently attached to the earth. As a result of their movability, computer databases can be protected under the Penal Code.
The 2000 Copyright Act
The Copyright Act of 2000 (the “Copyright Act”) safeguards intellectual property rights in literary, dramatic, musical, artistic, and cinematographic works. The term “literary work” also includes computer databases. As a result, copying a computer database or copying and distributing a database constitutes copyright infringement, for which civil and criminal remedies are available. However, the Copyright Act makes it difficult to distinguish between data protection and database protection. Data protection is concerned with protecting individuals’ informational privacy, whereas database protection is concerned with protecting the creativity and investment put into the compilation, verification, and presentation of databases.
The Draft Data Protection Legislation
The Government intends to submit the Data Protection Bill (‘the Bill’) to the National Parliament for enactment, and in that regard, an internal draft of the Data Protection Act was circulated in November 2020. While the content of the law has not been made public, there have been a number of indications from the government about the new dimensions that the Bill will introduce.
The Bill is intended to define data controllers (as opposed to data users) as individuals who collect, process, use, share, or otherwise process data within Bangladesh or data of Bangladesh residents. It has been reported that it will cover certain aspects of the General Data Protection Regulation (Regulation (EU) 2016/679) (‘GDPR’), specifically the data quality principle, use limitation principle, and security safeguards principle, as opposed to the collection limitation principle and accountability principle, which are addressed to some extent by the Digital Security Act.
Another new requirement is to push for data localisation, or data sovereignty, as the Bill states that Bangladeshi citizens’ personal data must remain in the country. The Bill specifically states that every data controller must keep at least one serving copy of such data within the geographical boundaries of Bangladesh.
The public sector and Artificial Intelligence law
There is no separate law governing this matter. However, under the Digital Security Act, anyone who commits or aids and abets in committing an offence via computer, digital device, computer network, digital network, or any other digital medium will face a term of imprisonment of up to 14 years or a fine of up to BDT 2.5 million.
The issues raised foreshadow only a sliver of the regulatory scrutiny that sentient systems will face. Meeting that scrutiny requires two entity-level attitudes. First, because box-ticking compliance alone is insufficient, entities may consider investing in processing techniques that keep sentient systems data-light.
Second, entities must recognize that privacy is not synonymous with privacy law. Privacy is an interdisciplinary goal; organizations must empower their engineers to determine its technical boundaries.
Entities deploying such systems may, for starters, articulate a processing pipeline for them. This will be done in accordance with their privacy policies. The pipeline must include the following information: the system’s role, the location of its servers, the analytics and third-party tracking tools that the system may use, and the risks that its data processing activities may cause.
Concurrently, businesses must intensify efforts to recognize the wide range of functions that their sentient system may perform. This data can be used to set hard limits on data processing. They can also be used to identify safe harbor use cases; for example, sentient systems processing data to revive languages may be exempt from certain privacy law provisions.
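As a rough sketch, such a pipeline disclosure could be kept as a simple machine-readable record that also drives runtime enforcement. All field names and values below are illustrative assumptions, not terms drawn from any statute or standard:

```python
# Hypothetical processing-pipeline disclosure for a sentient chatbot.
# Every field name and value here is an illustrative assumption.
pipeline_disclosure = {
    "system_role": "customer-support conversational agent",
    "server_location": "Dhaka, Bangladesh",
    "third_party_tools": ["web analytics", "crash reporting"],
    "processing_risks": [
        "sensitive personal information volunteered mid-conversation",
        "incidental data about the user's friends or family",
    ],
    # Hard limits derived from the disclosure, enforced at runtime.
    "hard_limits": {
        "retention_days": 30,
        "blocked_topics": ["medical history", "financial information"],
    },
}

def within_scope(topic: str) -> bool:
    """Refuse processing for topics outside the declared purpose."""
    return topic not in pipeline_disclosure["hard_limits"]["blocked_topics"]

print(within_scope("order status"))     # a declared, in-scope purpose
print(within_scope("medical history"))  # a blocked sensitive category
```

Keeping the disclosure as data rather than prose makes it auditable: the same record that is shown to users can be checked in code before each processing step.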
Transparency and the law with regard to Artificial Intelligence:
The common thread running through these recommendations is transparency. Sentient systems’ commitment to openness is likely to serve as an antidote to the concern that they ‘monitor’ individuals by collecting personal data. Adopting a framework that operationalises openness and fairness in personal data processing may assist entities in effectively navigating privacy law.
The 2030 Agenda was incorporated into Bangladesh’s Seventh Five Year Plan (FY2016-FY2020). This is an excellent opportunity to carry out the 2030 Agenda while reflecting the needs of the SDGs in the national plan. To advance, the Bangladesh government, NGOs, philanthropists, tech companies, and organizations that collect or generate large amounts of data will need to take decisive action. Two major issues must be addressed: data accessibility, and a scarcity of talent capable of improving AI capabilities, refining models, and implementing solutions.
AI can play critical roles in addressing the challenges of the SDGs. The McKinsey Global Institute has identified approximately 160 SDG use cases where AI can be applied to solve problems. Bangladesh is committed to using emerging Artificial Intelligence to solve the most pressing SDG problems.
To meet the upcoming challenges of artificial intelligence, an ideal procedure for applying AI in various sectors is required. The national strategy identifies seven priority sectors: public service delivery, manufacturing, agriculture, smart mobility and transportation, skills and education, finance and trade, and health. Scopes and recommended actions have been identified for each sector. Taking all of the sectoral recommendations and challenges into account, six strategic pillars for AI in Bangladesh were identified, along with a development roadmap for each pillar, in order to establish a sustainable AI ecosystem and Artificial Intelligence law in the country.
Bangladesh’s six strategic Artificial Intelligence pillars are as follows:
- i) research and development,
- ii) AI workforce skilling and reskilling,
- iii) data and digital infrastructure,
- iv) ethics, data privacy, security, and regulations,
- v) funding and accelerating AI start-ups, and
- vi) industrialisation for AI technologies.

Aside from a strategic brief, each strategy includes a road map, an action plan, related stakeholders, and lead ministries.
The summary roadmap published by the government shows the broader strategy steps planned for Bangladesh over the next five years. From there, the country can assess its current readiness in terms of infrastructure, awareness, resource pool, and social and legal challenges when developing the road map.
Deepfakes and Artificial Intelligence law in Bangladesh:
Deepfakes are fake media in which a person’s likeness in an existing image or video is replaced with someone else’s. While the act of faking content is not new, deepfakes use powerful machine learning and artificial intelligence techniques to manipulate or generate visually and audibly deceptive content. Deep learning is used to create deepfakes, and the main machine learning methods involve training generative neural network architectures such as autoencoders or Generative Adversarial Networks (GANs).
Deepfakes have received widespread attention for their use in the creation of child sexual abuse content, celebrity pornographic video, revenge porn, fake news, hoaxes, bullying, and financial fraud. This has prompted both industry and government to respond by detecting and limiting their use.
How does deepfake Artificial Intelligence work?
AI technologies are used to create a deepfake: a program is trained to replace or synthesize faces, speech, and emotions, and can then depict an action that a person never actually performed.
As a result, it is clear that Deepfake is beneficial to the media and film industries. It can be a great tool for creating content and making films. However, it is used to create pornography, financial fraud, fake news, fake videos, bullying, and so on. It is obvious that this is a technology with few advantages and many disadvantages. It causes a slew of major issues for humanity.
Law does not evolve at the same rate as technology: technology advances rapidly while the law develops slowly, so technology easily wins the race. In Bangladesh, there is no specific law dealing with deepfake-related crimes, but the existing laws discussed below can help address this type of crime.
Copyright infringement and Artificial Intelligence law:
The World Intellectual Property Organization (WIPO) has addressed AI and intellectual property in a draft issues paper. However, copyright alone cannot prevent deepfakes, because the victims are not the owners of these videos and photographs. Section 72 of the Bangladesh Copyright Act 2000 lists, at length, acts that do not violate copyright. Unfortunately, it is the creator of the deepfake video or image who holds copyright protection; the victim is not protected by copyright and therefore has no copyright claim against the creator. It is thus extremely difficult to take legal action against the wrongdoer under copyright law.
Law Against Defamation and Artificial Intelligence law:
Another avenue for legal action against deepfake crime is a defamation case. In Bangladesh, however, defamation law is frequently misused.
According to Section 499 of The Penal Code 1860, “Whoever, by words either spoken or intended to be read, or by signs or by visible representations, makes or publishes any imputation concerning any person intending to harm, or knowing or having reason to believe that such imputation will harm, the reputation of such person, is said to defame that person, except in the cases hereinafter excepted.”
Section 500 of the Penal Code 1860 states that “whoever defames another shall be punished with simple imprisonment for a term which may extend to two years, or with fine, or with both.”
However, in Bangladesh, the majority of defamation cases are filed solely for harassment, and the dismissal rate of such cases in the lower courts is very low. Under Section 198 of the Code of Criminal Procedure 1898, any aggrieved person may file a defamation case.
The Digital Security Act 2018 also touches on defamation-related offences, but it does not prevent abuse of defamation provisions. As a result, our defamation laws must be strengthened.
Data protection legislation and Constitution in Bangladesh:
We have no dedicated data privacy or data protection laws, which is a harsh reality. The right to privacy of correspondence and other means of communication is recognized under Article 43 of our Constitution, and a person whose right is violated may petition the High Court Division under Article 102(1).
Under Section 7 of the Right to Information Act 2009, certain personal information is exempt from disclosure: no authority is obliged to publish it, and third parties have no right of access to it. The provisions of the Digital Security Act 2018 can also be used against the misuse of personal data. However, these provisions are insufficient.
The Pornography Control Act and Digital Security in Bangladesh:
The majority of deepfake videos are pornography or revenge porn, and the majority of the victims are women. The Pornography Control Act 2012 can be of assistance here. Section 8 of the Act restricts pornography with a wide range of penalties: under Section 8(1), capturing video or still pictures of sexual intercourse, or of behaviour exposing sexual sensation, with or without the consent of the parties, is punishable by imprisonment for a maximum of 8 years and a fine of BDT 200,000 (two lac taka).
Making pornographic videos involving minors is a graver offense, punishable by ten years in prison and a fine of BDT 500,000 (five lac taka) under Section 8(6).
AI is not only producing patentable products, but it has also begun producing potentially copyrightable works such as newspaper articles, songs, poems, and books, which are obviously creative and artistic in nature. For example, the Flow Machine developed by a team of Sony researchers can compose music; another machine, Mubert, which has been dubbed “the world’s first online music composer,” can continuously produce music in real time.
This leads us to the obvious questions: Is the creation of AI protected by intellectual property? Who owns the copyright to such a creation? Is it a breach of IPR ethics? These are increasingly important questions these days. There are numerous debates about who created AI and who owns it.
It should be noted that, as an extension of the Berne Convention (1971), only computer programs and data compilations have been protected as copyrightable works under Articles 4 and 5 of the WIPO Copyright Treaty and Articles 10(1) and 10(2) of Trade Related Aspects of Intellectual Property Rights (TRIPs). There is no mention of AI protection in these treaties.
In this regard, an intriguing example can be found in the Naruto case (Naruto v. David John Slater et al., No. 3:2015-cv-04324, 9th Cir.), in which a monkey took a ‘selfie’. When a photographer claimed rights over the monkey’s ‘selfie’, the US Copyright Office stated that “the Copyright Office would not register works produced by animals or machines.”
It even went on to say, “To qualify as a work of ‘authorship,’ a work must be created by a human being,” which was not previously mentioned in the copyright law, and the term ‘author’ was never defined in this law. In contrast, the European Union (EU) has proposed in a draft paper that robots powered by AI be given a “special legal status.”
According to the paper, such a robot must abide by basic ‘civil laws.’ The EU’s interpretation is somewhat acceptable, but the US’ refusal to grant copyright for non-human creation raises additional questions, such as who would own the rights to an AI creation. Some articles argued that these issues could be resolved through agency law or that the person in charge of the AI should be granted copyright. Other arguments suggest that the issue of co-authorship be considered whenever an AI creation is involved.
However, many countries’ laws are deficient in terms of AI, so excluding AI from copyright law is not the ultimate solution. This is not the way to approach AI development. To address this difficult issue, more global attention and consensus are required.
WIPO Copyright Treaty and the TRIPs agreements:
Neither the WIPO Copyright Treaty nor the TRIPS Agreement clearly defines or mentions protection for AI-generated works. The WIPO Worldwide Symposium on Intellectual Property Aspects of AI, held at Stanford University from March 25 to 27, 1991, was also strangely silent on many important issues.
A careful reading of the symposium paper reveals that it was more concerned with defining AI than with finding a way forward to address the issues raised by IP rights. In fact, current laws, both at the national and international levels, are inadequate to address this issue.
Some attendees at the symposium argued that because software is protected by copyright laws, AI work should be treated similarly. However, if humans claim ownership of an AI creation, they must also accept responsibility for AI infringement. Technology increasingly shapes our daily lives and will continue to do so as AI creations advance.
AI is advancing at such a rapid pace that current legal systems are incapable of dealing with it. As a result, the international community must consider the potential legal and ethical implications. “The short-term impact of AI depends on who controls it; the long-term impact depends on whether it can be controlled at all,” Stephen Hawking once said.
Given the significance of the new thinking on AI creation and Hawking’s prediction, the WTO and WIPO should give this issue careful consideration. As AI becomes more difficult to distinguish from human creativity, the legal issues surrounding authorship are bound to become more complicated in the coming years.
Numerous challenges with Artificial Intelligence law and Deepfakes:
There are legislative barriers to protecting AI-generated inventions, particularly with regard to human inventorship requirements, prior arts, examination of inventive steps, and novelty. AI-generated inventions pose numerous challenges, including determining what constitutes “prior art” for machine-generated inventions. Is it possible for ordinary skilled people, such as patent examiners, to locate ‘prior arts’ produced by sophisticated machines?
Do sophisticated machine AI examiners, rather than human patent examiners, need to be used to search for ‘prior art’? Furthermore, because inventive steps are judged on ‘non-obviousness,’ which implies the gap or improvement between the proposed invention and existing ‘prior arts,’ such differences would be difficult to measure by a person with ordinary skill in the relevant art. Furthermore, computer-generated claims may be designed in such a way that they obstruct future advances in knowledge.
Another point worth discussing is the requirement for human inventorship in a patent application. Even though the EU approach of ‘first to file’ justifies non-examination of the inventor in the true sense, failure to meet the formal requirement of human inventorship will result in patent application rejection. In contrast, the US approach of ‘first to invent’ requires disclosure of the inventor by definition, or it will face EU-style consequences.
A similar concept can be found in the copyright system, where human authorship is required and thus AI generated works are not protected by copyright.
An analogous argument can be made in line with the ratio of the famous Monkey Selfie case (Naruto v Slater), in which the US court did not allow authorship to monkeys, despite the selfies being taken independently by monkeys that clearly had independent thinking abilities. The jurisprudence of the Infopaq case (Infopaq International A/S v Danske Dagblades Forening) in the European Union may also exclude AI-generated works from copyright protection.
The court ruled in this case that copyright is only granted to original works, and that the originality must be stamped with the “author’s personality.” Similar difficulties may be encountered in AI-generated designs.
As a result, there is ambiguity, a lack of legal precision, and policy uncertainty regarding the intellectual property protection of AI generated contents. Policymakers, including relevant stakeholders in digital Bangladesh, should consider potential legislative and policy options to protect AI industry investments and promote creativity and innovation in this burgeoning sector.
Do you want to know more about Cyber Law and Artificial Intelligence law in Bangladesh?
Get advice on Cyber Law and Artificial Intelligence law in Bangladesh from Tahmidur Rahman Remura: TRW: The Law Firm in Bangladesh:
- Bail or Anticipatory Bail for IT matters
- Filing Cyber Cell Police Complaint
- Network Deployment
- E Contracts
- Credit Card Fraud
- Online Banking Fraud & Scams
- Blocking defamatory websites, webpages
- Any other Cyber law or Artificial Intelligence law related issues
For queries or legal assistance, please reach us at:
E-mail: [email protected]
Phone: +8801847220062 or +8801779127165 or +8801708080817
Address: House 410, Road 29, Mohakhali DOHS