Commit

cleanup
wslyvh committed Dec 18, 2024
1 parent eb83833 commit 1eff05d
Showing 1 changed file with 6 additions and 14 deletions.
@@ -9,21 +9,15 @@
"audience": "Product",
"featured": false,
"doNotRecord": false,
- "tags": [
- "Censorship Resistance",
- "Permissionless",
- "Privacy"
- ],
- "keywords": [
- "AI"
- ],
+ "tags": ["Censorship Resistance", "Permissionless", "Privacy"],
+ "keywords": ["AI"],
"duration": 457,
"language": "en",
- "sources_swarmHash": "c5bb2e821c903b3a3b35c03901a6a4ecd4fbfb64f461cc95cf25ec2dcd983d8a",
+ "sources_swarmHash": "",
"sources_youtubeId": "B_5wj6TfX8s",
"sources_ipfsHash": "",
"sources_livepeerId": "",
- "sources_streamethId": "67359da49dbb7a90e17463e3",
+ "sources_streamethId": "",
"transcript_vtt": "https://streameth-develop.ams3.digitaloceanspaces.com/transcriptions/67359da49dbb7a90e17463e3.vtt",
"transcript_text": " Hi, guys. I hope you're having a great DevCon. It's great to be here. I've got five minutes, so I'm going to go really fast. Bear with me. So imagine a small group of companies control access to the most powerful machine learning used by billions of people worldwide. They record all your AI conversations, monetize your data, and share it with governments on demand. Everything you share is attached to your identity forever. They record... Whoops. What's going on here? Okay. Sorry about that. Through politicized content policies, their AI models are trained to coddle and redirect you when you explore topics determined to be taboo. They restrict and redact information and influence your thinking based on their view of the truth. Does this sound like science fiction? It's not. It's already happening. The current AI development path leads to a few powerful companies controlling the technology and becoming the arbiter of truth. The race to dominate the consumer AI market is on. OpenAI and by proxy Microsoft have a head start. Their partnership puts AI in the hands of 1.5 billion iPhone users. The Biden administration's policies entrench AI development in the hands of a few powerful entities, accelerating centralization that favors incumbents. It's notable that the newly appointed U.S. Artificial Intelligence Safety and Security Board doesn't include any open source or decentralized AI leaders. I believe AI should be optionally private. Our interactions with AI are personal and intimate. This isn't the polished... This is not the right slide either. Sorry, guys. This isn't the polished social media version of us. It's raw, honest, intellectual exploration. Would you share your diary online? That's the level of vulnerability we expose when we use AI without privacy safeguards. Popular AI tools store your inputs and outputs, and in many cases, the platform owns the outputs you create. As I mentioned earlier, these inputs are attached to your identity. 
Platforms are vulnerable to hackers, which continually lead to breaches. Does anyone remember Equifax? Information held by governments is equally vulnerable, and the authorities making the privacy rules can't even protect their own data. A recent EU Parliament data breach exposed sensitive personal data of more than 8,000 staffers. Your data doesn't need to be leaked to expose you to manipulation. Cambridge Analytica demonstrated how information can change the tech giants, sorry, the information you share with tech giants can be scraped without your consent to create campaigns designed to influence your views. We need to stop volunteering our data and take control over what we share with AI. I believe AI should be uncensored. Centralized platforms censor according to often opaque content policies, influenced by the values of those who control the platform, hidden behind system prompts that are only revealed through jailbreaking. Users think they're interacting with true machine intelligence, a calculator programmed to do statistical inference on language. In reality, they're engaging with proprietary partiality within guardrails imposed by humans who have their own biases. At best, we receive nonsensical outputs like Gemini producing black vikings. At worst, human adulterated AI can perpetuate the silencing of public discourse. No entity, public or private, should monopolize or contextualize truth. Open access to AI is under threat. Biden's AI executive order requires licenses for large models that restrict the number of parameters allowed. California proposed an AI bill which includes criminalizing certain open source AI developments. The EU's AI Act is more permissive of open source but strict on large scale AI. The Act's author himself has raised concerns that the regulatory bar has been set too high. A stark example of excessive regulation backfiring. 
France-based Mistral and Meta both recently withheld their latest open source AI models from the EU, preventing an entire region from accessing advancing AI. Politics will play a role in AI. Some push for tight controls, others champion open source development. The back and forth highlights that AI's future should not hinge on political whims. When confidence in our political leaders is waning, do we really trust them to regulate intelligence? AI is being adopted rapidly. A recent Harvard study found that nearly 40% of all U.S. adults between 18 and 64 have used generative AI. But unlike money or social media, we don't need to change our existing behavior, but we do need to start as we intend to continue. We must seek out and use alternatives. Open source permissionless AI, evolved through thoughtful iteration, is impervious to political fancy and ideology, and the antidote to gate-kept, curated, and censored AI. It's pretty easy for a developer to run a small model privately, locally, using Ollama or a similar service. Open source models are rapidly becoming competitive, surpassing closed source models on many benchmarks. You can find a plethora of open source models on Hugging Face. I've listed a few here. For those who can't or don't want to run AI locally, we created Venice, a generative AI platform that embodies the principles of permissionlessness. You can chat with some of the leading open source models too large to run locally, generate images, create and interact with AI characters, and write and debug code, all in private. Venice uses decentralized infrastructure to run the platform. All of your Venice activity is stored only on your browser. Venice never shares your data with anyone. We simply can't because we never had it in the first place. Use it anonymously and for free. If you want greater amounts of inference or to access the API, you can upgrade to Pro. 
There's no doubt that AI will change humanity, and we should engage with it, but through mediums that are optionally private, uncensored, and permissionless. Thanks. Hello. Okay, that's amazing. This is time for questions right now. We probably have time for one question. Anyone have any burning question at the moment about permissionless AI? It's kind of an important part of the... Oh, we'll go. One question. How can you in that system run your own LoRAs and your own, how was it called, tuned models? Can you do that? Since you say you do not have the data on your servers. Yeah. So we provide access to open source models. We host those on our service. We have a proxy service. So when you send in your prompt to Venice, it's encrypted, sent via proxy to a decentralized GPU. The response is also encrypted via proxy and sent back to you on your browser. So nothing persists on Venice servers. We don't see your prompt, we don't see the response, and it's never stored. So I cannot upload my own LoRAs after I did my own training? You can't upload your own LoRAs for training, but just this week, hopefully later today, so I might be front-running myself, we are going live with image-to-image. So you'll be able to upload your own images.",
"eventId": "devcon-7",
@@ -32,7 +26,5 @@
"slot_roomId": "stage-4",
"resources_presentation": "https://docs.google.com/presentation/d/1kklsZ1YE71cdtzZNkgKNXlsh133eDOoZO3-I29W9u9s",
"resources_slides": "https://drive.google.com/file/d/1oduMhD9MDwrPtDQAGfBx888DIF9nj_f7/view",
- "speakers": [
- "teana-baker-taylor"
- ]
- }
+ "speakers": ["teana-baker-taylor"]
+ }
