Why ShapeShift Founder Erik Voorhees Is Pivoting to a Privacy-Centric AI Startup

Erik Voorhees, founder of the cryptocurrency exchange ShapeShift, announced on Friday the public launch of his latest venture: Venice AI, a privacy-focused generative AI chatbot.

Privacy is a pressing concern both in the cryptocurrency space and among artificial intelligence users, and it was a central factor in the creation of Venice AI, he said.

“I saw where AI is going, which is to be captured by large tech companies that are in bed with the government,” Voorhees told Decrypt. “And that really worried me, and I see how powerful AI is, how consequential it can be—an amazing realm of new technologies.”

Large tech companies are often under the thumb of government and act as gatekeepers to AI, Voorhees lamented, something that could lead us into a dystopian world.

“The antidote to that is open-source decentralization,” Voorhees said. “Not giving monopoly power over this stuff to anyone.”

Acknowledging the important work done by OpenAI, Anthropic, and Google in pushing the field of generative AI forward, Voorhees said consumers should still have the choice to use open-source AI.

“I don’t want that to be the only option; I don’t want the only option to be closed source, proprietary, centralized, censored, permissioned,” he said. “So, alternatives need to exist.”

Voorhees launched the ShapeShift cryptocurrency exchange in 2014. In July 2021, the exchange said it would transition to an open-source decentralized exchange (DEX), with control of the exchange transferring from Voorhees to the ShapeShift DAO.

ShapeShift announced in March that it would shut down after becoming embroiled in a battle with the U.S. Securities and Exchange Commission. The exchange agreed to pay a $275,000 fine and abide by a cease-and-desist order to settle allegations that the exchange allowed users to trade digital assets without registering as a broker or exchange with the agency.

In the intervening three years, Voorhees said he had turned his attention to building a permissionless, decentralized AI model.

Venice AI does not store user data and can’t see user conversations, Voorhees said, explaining that Venice AI sends users’ text input through an encrypted proxy server to a decentralized GPU that runs the AI model, which then sends the answer back.

“The whole point of that is for security,” Voorhees said.

“[The GPU] does see the plain text of the specific prompt, but it doesn’t see all your other conversations, and Venice doesn’t see your conversations, and none of it is tied to your identity,” he said.
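The relay architecture Voorhees describes can be sketched in a few lines. This is a purely illustrative mock-up, not Venice AI's actual code: the function names, the in-process "proxy," and the dummy GPU worker are all assumptions, but they show the claimed property that no single party sees both the user's identity and the prompt history.

```python
# Hypothetical sketch of the flow described above: the client talks only to
# a proxy, the proxy strips identifying metadata before forwarding the prompt
# to a GPU worker, and the worker never learns who asked the question.
# All names here are illustrative, not Venice AI's implementation.

def client_request(user_id: str, prompt: str) -> dict:
    """The client's request, as the proxy receives it."""
    return {"user_id": user_id, "prompt": prompt}

def proxy_forward(request: dict) -> dict:
    """The proxy drops identity; user_id never leaves this hop."""
    return {"prompt": request["prompt"]}

def gpu_worker(job: dict) -> str:
    """The GPU host sees the plain-text prompt, but nothing tying it to a user."""
    assert "user_id" not in job  # identity was stripped upstream
    return f"response to: {job['prompt']}"

answer = gpu_worker(proxy_forward(client_request("alice", "hello")))
print(answer)  # the worker answered without ever seeing "alice"
```

The point of the sketch is the trust split: the proxy sees who you are but not your history, and the model host sees one plain-text prompt but not your identity.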

Voorhees acknowledged that the system does not provide perfect privacy, as it is not completely anonymous or zero-knowledge, but he expressed the view that Venice AI's model is "substantially better" than the status quo, in which conversations are sent to and stored by a centralized company.

“They see all of it, and they have it all forever, and they tie it to your identity,” Voorhees said.

AI developers like Microsoft, Google, Anthropic, OpenAI, and Meta have worked strenuously to improve public and policymaker perceptions of the generative AI industry. Several top AI firms have signed onto government and non-profit initiatives and pledges to develop "responsible AI."

These services ostensibly allow users to delete their chat history, but Voorhees said it's naive to assume the data is gone forever.

“Once a company has your information, you can never trust it’s gone, ever,” he said, noting that some government regulations require companies to retain customer information. “People should assume that everything they write to OpenAI is going to them and that they have it forever.”

“The only way to resolve that is by using a service where the information does not go to a central repository at all in the first place,” Voorhees added. “That’s what we tried to build.”

On the Venice AI platform, chat history is stored locally in the user's browser and can be deleted, whether or not the user creates an account. Customers can set up an account with an Apple ID, Gmail account, email address, Discord, or by connecting a MetaMask wallet.

There are advantages to creating a Venice AI account, however, including higher message limits, the ability to modify prompts, and earning points—although points don't currently serve any function apart from making it easier to track usage. Users looking for even more features can pay for a Venice Pro account, currently priced at $49 annually.

Venice Pro provides unlimited text prompts, removes watermarks from generated images, enables document uploads, and allows users to "turn off Safe Mode for unhindered image generation."

Despite the MetaMask account integration, Voorhees noted that users cannot yet subscribe to Venice Pro with digital currencies—but said it is “coming soon.” Meanwhile, because it is built atop the Morpheus Network, the company is rewarding holders of the Morpheus token.

“If you have one Morpheus token in your wallet, you get a free Pro account indefinitely,” he said. “You don’t even have to pay, you just hold one Morpheus token and you automatically have the Pro account as long as that token is in your wallet.”
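The token-gated entitlement Voorhees describes amounts to a simple balance check against the connected wallet. The sketch below is an assumption about how such a check could look; the function name and threshold constant are illustrative, not Venice AI's actual implementation.

```python
# Hypothetical sketch of the entitlement described above: holding at least
# one Morpheus (MOR) token in the connected wallet grants Pro access for as
# long as the token remains there. Names are illustrative only.

PRO_THRESHOLD_MOR = 1.0  # one Morpheus token, per the article

def has_pro_access(mor_balance: float) -> bool:
    """Pro access is re-evaluated against the wallet's current MOR balance."""
    return mor_balance >= PRO_THRESHOLD_MOR

print(has_pro_access(1.0))   # True: holding one token grants Pro
print(has_pro_access(0.5))   # False: access lapses if the token leaves
```

Because the check is against the current balance rather than a one-time purchase, moving the token out of the wallet would end the entitlement, matching the "as long as that token is in your wallet" condition.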

As they do with any tool, cybercriminals persistently develop ways to circumvent the guardrails built into AI tools to harness them to commit crimes, whether via using obscure languages or creating illicit clones of popular AI models. However, according to Voorhees, interacting with a language calculator is never illegal.

“If you were to go on Google and search for ‘how do I make a bomb?’ you can go find that information—it’s not illegal to go find that information, and I don’t think it’s unethical to find that information,” he said. “What is illegal and unethical is if you build a bomb to hurt people, but that has nothing to do with Google.

“That’s a separate action that the user is taking. So for Venice in particular, or AI generally, I think a similar principle applies,” he said.

Generative AI models like OpenAI’s ChatGPT have also come under increased scrutiny over how they are trained, where their data is stored, and how user privacy is handled. Venice AI collects limited information, like how the product is used—creating new chats, for example—but its website says the platform cannot see or store “any data about the text or image prompts shared between you and the AI models.”

For its text generation, Venice uses the Llama 3 large language model, which was developed by Facebook parent company Meta. Customers can also switch between two Llama 3 versions: Nous H2P and Dolphin 2.9.

In a Twitter Spaces session after the launch of Venice AI, Voorhees praised the work Mark Zuckerberg and Meta have done in generative AI, including making the powerful LLM open source.

“Meta deserves tremendous credit for essentially spending hundreds of millions of dollars to train a cutting-edge model and just releasing it for free to the world,” he said.

Venice also allows users to generate images using open-source models Playground v2.5, Stable Diffusion XL 1.0, and Segmind Stable Diffusion 1B.

When asked if Venice AI would use services from OpenAI or Anthropic, Voorhees’ response was an emphatic no.

“We will never provide Claude LLM and never provide OpenAI’s service,” he said. “We’re not a wrapper for centralized services, we are a way to access open-source models explicitly and only.”

With Venice AI built atop the decentralized Morpheus network, which powers open-source smart agents, Voorhees acknowledged that there are concerns about the platform's performance. It's something the team is focused on, he explained.

“If we want to bring private, uncensored AI to people, it has to be roughly as performant as centralized companies,” Voorhees said. “Because if it’s not, people are going to prefer the convenience of the central company.”

Edited by Ryan Ozawa.
