AI Under Scrutiny: Privacy, Bias, and the Push for Decentralization

Photo: zycrypto.com

2024-7-4 15:32

Artificial Intelligence (AI) is a powerful and transformative technology, yet it poses a serious risk of disrupting human civilization if handled incorrectly. ChatGPT, the pioneering large language model (LLM), now serves roughly 180 million monthly users. This signals that generative AI tools have become everyday applications, with people from all walks of life using the technology to enhance their productivity or creativity.

But amid this strong adoption lies the danger: What happens when rogue actors misuse generative AI to advance their agendas? There are already concerns that some actors are using generative AI to interfere with elections by deceiving voters with deepfakes and other AI-generated content.

According to a recent federal bulletin from U.S. security agencies, generative AI is among the threats to the upcoming presidential election slated for November 2024.

“A variety of threat actors will likely attempt to use generative artificial intelligence (AI)-augmented media to influence and sow discord during the 2024 U.S. election cycle, and AI tools could potentially be used to boost efforts to disrupt the elections,” the bulletin noted.

Well, that’s just the tip of the iceberg. The most significant pitfall of LLMs and other generative AI tools lies not in the models themselves but in the companies behind them. A closer look at the industry’s ongoing developments points to a situation where big tech is positioning itself to continue controlling the masses.

ChatGPT Scrutinized for Privacy Violations 

Last year, ChatGPT’s parent company, OpenAI, came under scrutiny from various regulators around the world, including the U.S. Federal Trade Commission (FTC). The FTC requested that OpenAI provide detailed information about its data security arrangements and privacy safeguards. Meanwhile, in Italy, regulators temporarily blocked ChatGPT over privacy concerns, a move that prompted other E.U. watchdogs to put ChatGPT in the spotlight.

What’s more worrying, however, are recent criticisms from notable figures such as NSA whistleblower Edward Snowden, who warned against ChatGPT following OpenAI’s appointment of a former NSA director to the company’s board. Elon Musk has also gone public, criticizing a potential collaboration between OpenAI and Apple, which he believes will lead to a violation of privacy rights.

Google’s Gemini Accused of Racial Bias 

Google’s Gemini LLM is another case study in the bias that generative AI may subject consumers to, knowingly or unknowingly. The AI tool received significant backlash for generating images that hinted at racial bias. Users complained that Gemini failed to generate images of white people when depicting certain historical events.

Instead, the model skewed towards generating images of Black people even when this was clearly out of context. A good example: when a user prompted Gemini to generate images of America’s founding fathers, the AI produced images of Black women. What’s more, Google recently warned Gemini users not to share confidential information with the LLM, since human annotators regularly read and label conversations to improve the model.

Microsoft’s Recall Feature Dubbed ‘Spyware’ 

Microsoft’s latest Windows version has also come under fire for one feature of its newly fitted AI assistant, Copilot. The feature, ‘Recall,’ is designed to record a user’s screen activity, allowing them to easily replay a specific event or revisit certain tasks, functioning much like a photographic memory.

This has not sat well with critics, several of whom have likened Recall to spyware. Users on X (formerly Twitter) have voiced concerns that while the data is stored only locally on the computer, it poses a danger if the device is lost or authorities compel its owner to hand over the information.

Can AI Developments Be Democratized? 

While the democratization of AI innovations might be a long way off, given the influence and access to trainable data that the pioneering companies (big tech) currently have, it is possible to decentralize the innovations in this realm.

One way to achieve this is by integrating blockchain technology, which is fundamentally designed to operate democratically so that no single authority or point of failure can compromise the network.
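To make that property concrete: decentralized networks typically rely on a consensus mechanism such as proof-of-work, in which producing a valid block is computationally expensive but verifying it is cheap for every participant, so no central authority is needed to certify the result. The sketch below is a minimal, generic illustration of that dynamic in Python; it is not Qubic’s actual protocol, and the `mine`/`verify` names and difficulty scheme are illustrative assumptions.

```python
import hashlib


def mine(data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zeros.

    Any participant can run this search, and any participant can check the
    answer, which is what removes the need for a trusted central authority.
    """
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1


def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification costs a single hash, so every node can audit every block."""
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point of the design: `mine` may take thousands of hash attempts, while `verify` takes exactly one, which lets an open network of mutually distrustful nodes agree on a shared history without a gatekeeper.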

For example, in Qubic, an AI-powered Layer 1 blockchain, the ecosystem is powered by a decentralized utility coin dubbed $QUBIC. This native coin is designed as an ‘energy unit’ that powers smart contract operations and other services on the Qubic platform. Unlike centralized AI developments, anyone can contribute to Qubic’s operations by purchasing $QUBIC on exchanges such as gate.io or by participating in Qubic’s Useful Proof-of-Work (PoW) mining ecosystem.

Another way to introduce democracy into AI is through self-regulation. As it stands, most regulators worldwide are playing catch-up, making the AI industry very hard to regulate. However, if the players in this sector agreed to some form of self-regulation, governance would be far more harmonized.

In fact, last year seven companies (OpenAI, Microsoft, Google, Meta, Amazon, Anthropic, and Inflection) agreed to a self-regulatory framework intended to ensure trust, privacy, and safety. It was a step in the right direction, but what happens if all these players collude to breach consumer privacy or advance political and societal biases? That is the caveat with self-regulation.

Conclusion 

As highlighted in this article, the centralization of AI poses a risk to the core tenets of today’s society. On one hand, it could be used to erode the moral fabric; on the other, AI is a threat to political democracy itself.

The beauty, however, is that, as with the internet, we still have an opportunity to build moderation into AI developments. More importantly, we have technologies such as blockchain, which could serve as the foundational building block of a transparent and unbiased future that truly benefits society.
