03/20/2024 / By Zoey Sky
Fox Corporation, one of the largest mass media companies in the United States, has launched a blockchain platform that lets firms track how their content is used online and allows readers to verify the source of what they see.
The Verify platform, developed in partnership with Polygon Labs, a blockchain company, was designed to be a game changer for online media.
The platform lets users verify content, history and original sources through an open-source protocol. In theory, Verify will operate as an internet database of media content that is cryptographically signed to establish content origin and history.
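The general pattern behind such a signed provenance database can be sketched in a few lines of code. The example below is purely illustrative and is not Fox's or Polygon Labs' implementation; it assumes an Ed25519 publisher key pair and the third-party Python "cryptography" package, and simply shows how hashing a piece of content and signing the hash lets anyone with the publisher's public key confirm where the content came from and that it has not been altered.

```python
# Illustrative sketch only -- not Fox's or Polygon Labs' actual code.
# It shows the general hash-and-sign pattern behind content provenance:
# a publisher hashes a piece of content, signs the hash with its private
# key, and anyone holding the matching public key can later confirm that
# the content is unchanged and really came from that publisher.
# Requires the third-party "cryptography" package.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher's key pair (in practice the private key never leaves the publisher).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Full text or raw bytes of an article, image or video segment."

# Fingerprint the content, then sign the fingerprint.
content_hash = hashlib.sha256(article).digest()
signature = private_key.sign(content_hash)

# A reader (or an AI crawler) re-hashes what it received and checks the signature.
received_hash = hashlib.sha256(article).digest()
try:
    public_key.verify(signature, received_hash)
    print("Content matches what the publisher signed.")
except InvalidSignature:
    print("Content was altered or did not come from this publisher.")
```

In a protocol like the one described, the hash and signature would presumably be recorded on a public blockchain, so the check would not depend on trusting the website serving the content.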
In a statement, Polygon Labs explained that the Verify platform will solve a problem that “needs to be met head-on,” especially because there is currently a lack of “real-world solutions that prove the provenance of any given piece of content.”
However, Verify doesn’t seem to verify the accuracy of the content itself, just the original source.
Polygon Labs also said that while artificial intelligence has advanced very quickly, that progress has a downside: the rise of AI-generated media, which has made it more difficult to differentiate real from fake articles, audio and images.
The blockchain company added that with Verify, readers can easily confirm if “an article or image that purportedly comes from a publisher in fact originated at the source.”
Verify is also meant to serve as a technical bridge between media companies and AI platforms. If the platform functions as advertised, media companies can register content on it to verify that it’s theirs.
And once the content is verified, usage rights can be granted to AI platforms that want to use the content to train language models that support apps like ChatGPT.
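To make that registration-and-licensing flow concrete, here is a hypothetical sketch, again not Verify's actual design: an in-memory dictionary stands in for what would in practice be an on-chain registry keyed by content hash, and the helper functions (register_content, grant_training_rights, may_train_on) and the platform names are made up for illustration.

```python
# Hypothetical, in-memory stand-in for the kind of registry the article describes:
# content is keyed by its hash, and each entry records who published it and which
# AI platforms (if any) have been granted the right to train on it. A real system
# like Verify would keep this mapping on-chain; the structure here is illustrative.
import hashlib
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    publisher: str
    # AI platforms granted the right to train on this content (illustrative field).
    licensed_to: set[str] = field(default_factory=set)

registry: dict[str, RegistryEntry] = {}

def register_content(publisher: str, content: bytes) -> str:
    """Publisher registers a piece of content and gets back its hash (the lookup key)."""
    content_hash = hashlib.sha256(content).hexdigest()
    registry[content_hash] = RegistryEntry(publisher=publisher)
    return content_hash

def grant_training_rights(content_hash: str, ai_platform: str) -> None:
    """Publisher grants a named AI platform permission to train on the content."""
    registry[content_hash].licensed_to.add(ai_platform)

def may_train_on(content: bytes, ai_platform: str) -> bool:
    """An AI crawler checks whether it is licensed to train on this exact content."""
    content_hash = hashlib.sha256(content).hexdigest()
    entry = registry.get(content_hash)
    return entry is not None and ai_platform in entry.licensed_to

# Example flow: register, license, check.
h = register_content("Fox News", b"Transcript of a broadcast segment.")
grant_training_rights(h, "ExampleAI")
print(may_train_on(b"Transcript of a broadcast segment.", "ExampleAI"))  # True
print(may_train_on(b"Transcript of a broadcast segment.", "OtherAI"))    # False
```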
Fox and other clients could then use the platform to negotiate content licensing deals with AI companies. (Related: Something BIG is about to go down… multiple confirming signals point to cyber takedown of financial system.)
Polygon Labs said that as more AI-generated text and images are posted online, Verify can help consumers “identify the veritable source of content” and give media publishers “more control over relationships with AI platforms scraping the web.”
In a statement, Fox Chief Information Security Officer Melody Hildebrandt announced that Fox privately launched a beta version of the Verify protocol in August 2023. The company then started uploading content to the database during Fox News’ first Republican presidential primary debate last year.
Since then, more than 80,000 pieces of content from sources including Fox Business, Fox News, Fox Sports and the company’s local television stations have been added to Verify.
Ultimately, the plan is to run all Fox content, including entertainment programming, through the Verify protocol.
Despite the alleged convenience of AI technology, its use has been a cause for concern among both governments and tech leaders since ChatGPT became widely available on Nov. 30, 2022.
Experts are also concerned about AI-generated deepfakes: computer-generated video and images so convincing that they are often hard to distinguish from real footage. The film industry has even started using deepfakes to de-age older actors.
AI-generated deepfakes have a darker side as well: they can be used to manipulate events on a global scale.
In 2022, a fake and heavily edited video showing Ukrainian President Volodymyr Zelenskyy telling his soldiers to lay down their weapons and surrender spread online. The video was debunked quickly, but it could have had serious consequences for the Ukrainian war effort had enough people believed it.
In December 2023, the New York Times filed a lawsuit against OpenAI and Microsoft because of the alleged unauthorized use of its content for training AI chatbots.
For most of 2023, the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists were on strike, partly over concerns that AI tools would be used to replace human writers and actors, effectively rendering them expendable.
Some major social media companies have taken action, at least in the short term, to rein in the use of AI.
Meta, the company that owns the social media platform Facebook, has allegedly taken steps to limit the use of AI during the 2024 presidential election, forbidding political campaigns and advertisers in regulated industries from using its new generative AI advertising products.
Additionally, YouTube has announced plans to introduce updates that will inform viewers when the videos they’re watching are created using AI.
Watch this episode of the “Health Ranger Report” as Mike Adams, the Health Ranger, discusses the ongoing misuse and weaponization of artificial intelligence technology.
This video is from the Health Ranger Report channel on Brighteon.com.