
Copyright concerns create the need for a viable alternative in the AI sector

When future generations look back at the emergence of artificial intelligence technology, the year 2025 will be remembered as a turning point, when the industry took concrete steps towards greater inclusion and embraced a decentralized structure that recognizes and fairly compensates all stakeholders.

The growth of AI has already fueled change in multiple industries, but the accelerating pace has also raised concerns around data ownership, privacy and copyright infringement. Because AI development is concentrated, with the most powerful models controlled by a handful of corporations, content creators are largely sidelined.

OpenAI, the world’s most prominent AI company, has already admitted as much. In January 2024, it told the UK’s House of Lords Communications and Digital Select Committee that it would not have been able to create its iconic chatbot, ChatGPT, without training it on copyrighted material.

OpenAI trained ChatGPT on virtually everything posted on the public internet before 2023, yet paid no compensation to the people who created that content – much of which is copyrighted. This has become a major source of controversy.

Decentralized AI projects such as the ASI Alliance have an opportunity to offer an alternative way of developing AI models. The Alliance is building a framework that gives content creators a way to maintain control over their data, with mechanisms for fair rewards if they choose to share their content with AI model makers. It is a more ethical basis for AI development, and 2025 may be the year it gets more attention.

AI’s Copyright Conundrum

OpenAI isn’t the only AI company accused of copyright infringement. Even models that purport to be open-source, such as Meta Platforms’ Llama 3, have been trained on data scraped from the public internet.

Ignoring the fact that much of this content is copyrighted, AI developers routinely help themselves to whatever they find online. Copyright laws are designed to protect creators of original works such as books, articles, songs, software, artwork, and photos from exploitation, making unauthorized use of such content illegal.

The likes of OpenAI, Meta, Anthropic, Stability AI, Perplexity AI, Cohere and AI21 Labs claim their activities fall under ‘fair use,’ a reference to an ambiguous clause in copyright law that allows limited use of protected material without obtaining permission from the creator. But in reality there is no clear definition of what ‘fair use’ is, and many authors claim that AI threatens their livelihoods.

Many content creators have resorted to legal action, including a leading lawsuit filed by The New York Times against OpenAI. In the suit, the Times alleged that OpenAI violated copyright when it used thousands of its articles to train its large language models. The media organization claims the practice is illegal because ChatGPT is a competing product designed to steal audiences from the Times’ website.

The litigation has sparked a debate: should AI companies be allowed to continue using any content on the internet, or should they be forced to ask for permission first and compensate those who create the training data?

The consensus seems to be shifting towards the latter. For instance, the late former OpenAI researcher Suchir Balaji told the Times in an interview that he was tasked with leading the collection of data to train ChatGPT’s models. He said his job involved scraping content from every possible source, including user-generated posts on social media, pirated book archives and articles behind paywalls. All of that material, he said, was scraped without seeking permission.

Balaji said he initially bought OpenAI’s argument that scraping is fair use if the information is posted online and freely available. Later, however, he began questioning the practice after realizing that products like ChatGPT could harm content creators. Ultimately, he said, he could no longer justify scraping data, and he resigned from the company in the summer of 2024.

A growing case for decentralized AI

Balaji’s departure from OpenAI seems to coincide with a realization among AI companies that the practice of helping themselves to any content found online is unsustainable and that content creators need legal protection.

This is evidenced by the rise of content licensing deals announced last year. OpenAI has agreed deals with a number of high-profile content publishers, including the Financial Times, News Corp, Condé Nast, Axel Springer, the Associated Press and Reddit, which hosts millions of pages of user-generated content on its forums. Other AI developers, such as Google, Microsoft and Meta, have formed similar partnerships.

But it remains to be seen whether these arrangements will prove satisfactory, especially as AI companies generate billions of dollars in revenue. While the terms of the content licensing deals are not disclosed, The Information claims they are worth a few million dollars per year. Considering that former OpenAI Chief Scientist Ilya Sutskever was paid a salary of $1.9 million back in 2016, the money offered to publishers may fall short of what the content is actually worth.

There’s also the fact that millions of small content creators — bloggers, social media influencers and the like — are left out of such deals entirely.

Arguments surrounding AI’s copyright infringement are likely to go years without resolution, and the legal ambiguity surrounding data scraping, coupled with a growing belief among practitioners that such practices are unethical, is helping to strengthen the case for a decentralized framework.

Decentralized AI frameworks offer developers a more ethical model for AI training, one where the rights of content creators are respected and where each contributor is fairly rewarded.

At the heart of decentralized AI sits the blockchain, which enables the development, training, deployment and management of AI models across distributed, global networks that everyone can own a stake in. This means anyone can participate in creating AI systems that are transparent, unlike the centralized, corporate-owned AI models often described as “black boxes.”

As arguments surrounding AI copyright infringement intensify, decentralized AI projects are making inroads. This year promises to be an important one in the shift towards more transparent and ethical AI development.

Decentralized AI in action

In late 2024, three blockchain-based AI startups formed the Artificial Superintelligence (ASI) Alliance, an organization working toward creating a “decentralized superintelligence” to power advanced AI systems that anyone can use.

The ASI Alliance says it is the largest open-source, independent player in AI research and development. It was created by SingularityNET, which developed a decentralized AI network and computation layer; Fetch.ai, which focuses on creating autonomous AI agents that can perform complex tasks without human assistance; and Ocean Protocol, creator of a transparent exchange for AI training data.

The ASI Alliance’s mission is to provide an alternative to centralized AI systems, with an emphasis on open-source, decentralized platforms for data and computational resources.

To protect content creators, the ASI Alliance is building an exchange framework based on Ocean Protocol technology, where anyone can contribute data to be used for AI training. Users will be able to upload data to the blockchain-based system and maintain ownership of it, earning rewards whenever it is accessed by AI models or developers. Others will be able to contribute by helping to label and annotate data to make it more accessible to AI models, and receive rewards for doing this work. In this way, the ASI Alliance promotes a more ethical way for developers to obtain the training data they need to build AI models.
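The flow described above — contributors retain ownership, and rewards accrue to owners and annotators each time their data is accessed — can be sketched in simplified form. This is an illustrative model only: the names (`DataExchange`, `register_dataset`, and so on) are hypothetical and do not reflect Ocean Protocol’s actual API or on-chain logic.

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    owner: str                              # contributor who retains ownership
    annotators: list = field(default_factory=list)  # helpers who labeled the data

class DataExchange:
    """Toy ledger: owners and annotators earn rewards per data access."""

    def __init__(self, access_fee: float, annotator_share: float = 0.2):
        self.access_fee = access_fee        # fee a developer pays per access
        self.annotator_share = annotator_share  # slice shared with annotators
        self.datasets = {}                  # dataset_id -> Dataset
        self.balances = {}                  # participant -> rewards earned

    def register_dataset(self, dataset_id: str, owner: str) -> None:
        self.datasets[dataset_id] = Dataset(owner)
        self.balances.setdefault(owner, 0.0)

    def add_annotator(self, dataset_id: str, annotator: str) -> None:
        self.datasets[dataset_id].annotators.append(annotator)
        self.balances.setdefault(annotator, 0.0)

    def access(self, dataset_id: str) -> Dataset:
        """A model developer pays the fee; the reward is split between
        the owner and any annotators of the dataset."""
        ds = self.datasets[dataset_id]
        pool = self.access_fee * self.annotator_share if ds.annotators else 0.0
        self.balances[ds.owner] += self.access_fee - pool
        for annotator in ds.annotators:
            self.balances[annotator] += pool / len(ds.annotators)
        return ds
```

For example, with a fee of 10 tokens and a 20% annotator share, two accesses to a dataset owned by “alice” and annotated by “bob” would credit alice 16 tokens and bob 4. In a real deployment this bookkeeping would live in smart contracts rather than an in-memory dictionary.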

Shortly after its formation, the Alliance launched an initiative focused on the development of more transparent and ethical “domain-specific models” specializing in fields such as robotics, science and medicine. Its first model is Cortex, which is said to be inspired by the human brain and designed to power autonomous robots in real-world environments.

Specialized models differ from general-purpose LLMs, which are great for answering questions and creating content and images, but are less useful when asked to solve more complex problems that require significant expertise. But building specialized models will be a community effort: the ASI Alliance needs industry experts to provide the data needed to train the models.

Humayun Sheikh, CEO of Fetch.ai, said ASI Alliance’s decentralized ownership model creates an ecosystem “where individuals support groundbreaking technologies and participate in value creation.”

Users without specialized knowledge can purchase and “stake” FET tokens to become part-owners of decentralized AI models and receive a share of the revenue those models generate when used by AI applications.
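The revenue-sharing idea behind staking is simple pro-rata arithmetic, sketched below. This is a hypothetical illustration of the concept, not the actual FET staking contract or its payout rules.

```python
def distribute_revenue(stakes: dict, revenue: float) -> dict:
    """Split model revenue among stakers in proportion to their stake.

    stakes: participant -> number of tokens staked
    revenue: total revenue to distribute for the period
    """
    total_staked = sum(stakes.values())
    return {who: revenue * amount / total_staked for who, amount in stakes.items()}
```

So if one staker holds 300 tokens and another 100, a 40-token revenue period would pay them 30 and 10 respectively; real staking schemes typically add lock-up periods and protocol fees on top of this basic split.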

For content creators, the benefits of a decentralized approach to AI are clear. ASI’s framework allows them to keep control of their data and track when it is used by AI models. It integrates mechanisms encoded in smart contracts to ensure that everyone gets fair compensation. Participants earn rewards for contributing computational resources, data and expertise or for supporting the ecosystem through staking.

The ASI Alliance operates a model of decentralized governance, where token holders can vote on key decisions so that projects evolve for the benefit of stakeholders rather than shareholders of corporations.

AI is essential for everyone

The advances made by decentralized AI are exciting, and they come at a time when they are needed. AI is evolving rapidly, and centralized AI companies are currently at the forefront of adoption — which, for many, is a major cause of concern.

Given the transformative potential of AI and the threat it poses to individual livelihoods, it is important that the industry shifts to a more responsible model. AI systems should be developed for the benefit of everyone, and that means every contributor is rewarded for participating. So far, only decentralized AI systems have shown that they can do this.

Decentralized AI is not just a nice-to-have but a necessity, representing the only viable option capable of breaking big tech’s stranglehold on creativity.

Tags: AI, artificial intelligence, Machine learning

