2023-01-18 22:00
Smart contract bug bounty platform Immunefi banned 15 people for allegedly submitting bug reports created by the generative artificial intelligence tool ChatGPT.
The whitehat hacker bounty platform insisted that ChatGPT could not identify bugs because it has no technical capability beyond providing answers to human inquiries.
ChatGPT Should Not Replace Whitehat Reports

According to Immunefi, whitehats can speed up the resolution of a software bug by describing the problem in their own words rather than through artificial intelligence language tools.
Here's ChatGPT on why you shouldn't use ChatGPT to generate and submit bug reports.
Additional reminder that submitting ChatGPT bug reports on Immunefi will get you banned because the output is never accurate or relevant. pic.twitter.com/nOvVOmQVmG
Immunefi is a bug bounty platform that rewards whitehats for finding problems with the smart contracts powering decentralized finance projects like Aave, Compound, and Synthetix. By Sep. 2022, the platform had paid whitehats $65 million, with an additional $138 million available for future payouts.
Software project owners hire whitehats to ethically test the security of their product in exchange for a bounty. These “good” hackers are contrasted with blackhats, who criminally exploit security flaws. On the other hand, so-called greyhats find bugs without the project owner’s permission.
Still, the bounty platform said that any genuine bugs highlighted by the tool should be reported through the proper channels.
#ImmunefiStats
We've permanently banned 15 people so far for submitting ChatGPT reports.
ChatGPT uses a large language model called GPT-3 to converse naturally with humans. Its ace card is its ability to answer questions by focusing on a question’s intent more than its words. For context, mainstream search engines generally rank results according to the quantity and quality of links to a web page.
ChatGPT CEO Says the Tool is Still a Work in Progress

By its own admission, ChatGPT’s sometimes biased training data means that its answers often lack common sense and context. Furthermore, its articulate style can dress up low-quality information. A moderator at the programming forum StackOverflow recently confirmed that the tool’s polished presentation often successfully disguises inaccurate answers.
Like Immunefi, StackOverflow banned ChatGPT responses from its platform.
Despite these glaring limitations, the chief executive of OpenAI, the company behind ChatGPT, is confident that the tool will evolve into a competent workplace tool.
“We can imagine an ‘AI office worker’ that takes requests in natural language like a human does,” said Sam Altman in a blog post last year. Smart contract auditing platform CertiK noted that the chatbot wasn’t ‘half-bad’ at finding bugs. At the same time, Canadian software engineer Tomiwa Aswmidum reportedly used the tool to create a crypto wallet by teaching it cryptographic rules.
Dogecoin promoter and Tesla CEO Elon Musk said in Dec. 2022 that ChatGPT’s AI is “scary good,” while Next47 venture capitalist Kate Reznykova pointed out ChatGPT’s staggering user adoption rate of one million users in just five days.
ChatGPT is scary good. We are not far from dangerously strong AI.
— Elon Musk (@elonmusk) December 3, 2022

Time it took to reach 1 million users:
Netflix – 3.5 years
Facebook – 10 months
Spotify – 5 months
Instagram – 2.5 months
ChatGPT – 5 days
However, Altman has cautioned against reading too much into ChatGPT’s developing abilities, calling it a “preview of progress” and adding that it shouldn’t be used for anything mission-critical.
“It’s a mistake to be relying on it for anything important right now,” he said.