AI could be one of the biggest technological advancements of our time, allowing us to solve various other problems facing humanity. Its misuse may also be one of the biggest threats to humanity. AI will be far more crucial to the development of humankind in the next few decades than particle physics. As such, large-scale research should be an international effort, on the scale of CERN or bigger.
- Launched 29/03/2023
If this technology is closed, it will further widen the gap in society, with the most powerful people able to use it for their own gain while simultaneously restricting the development of others. Open sourcing ensures that everyone has equal access to the technology and can work together collectively for the betterment of humanity.
Giving control of such an important technology to a few people who meet behind closed doors to decide how it's used will only lead to greater oppression, gaslighting and control of the masses.
Not being open means that a select group could use this technology to exert further control over the world's populations through misinformation, or by ensuring that only approved and filtered information is accessible.
In the current situation, democratization of AI research and care for its future are urgently needed.
Source: discord server
Ill intent
As we progress towards a possibly brighter future with AI, one thing truly stands in our way: AI, with its unknown reach, falling into the hands of large powers with ill intent. We have dealt with this throughout history, but now the game board is different, and the full capabilities of these tools are quite literally beyond grasp. Even if things go well, the biggest danger is these powers of ill intent using these super-tools to inflict unspeakable horrors on the people.
Source: None
Open means it can be verified to be for the benefit of humanity - without control or restrictions on what data it can give us.
Accelerating AI research is critical for human progress. As global challenges like climate change and economic inequality intensify, AI offers transformative solutions. It can optimize renewable energy systems, develop novel treatments, and automate mundane tasks. By investing in AI, we ensure that it remains safe, ethical, and governed by values that benefit humanity. Rapid AI advancements will catalyze scientific breakthroughs and drive economic growth, leading us towards a brighter future.
My opinion is we really need a lot of breakthroughs in medicine, and I hope newly developed AI models will help researchers with that.
Yes
Yes
golanghack
An open analogue of artificial intelligence is a guarantee of freedom for all mankind.
Source: golanghack
Brain
Is good
Source: My brain
Existential safety
Continuing to advance AI capabilities is catastrophic primarily not because of the biases the models might have or the jobs people might lose, but because, as half of ML researchers believe, there's at least a 10% chance of AI-induced existential catastrophe. We have to ensure the goals of the first highly capable AI align with human values, but
Source: aiimpacts.org/how-bad-a-future-do-ml-researchers-expect/
theft
They are funding their research with stolen data from ordinary people and think they are being clever about it. We know you are deeply entangled with Stability AI. You are terrible people.
What training data would this use? Without knowing that, we'd be potentially funding the development of a powerful model that can be used against our interests. It's quite disturbing that this doesn't seem to be covered extensively in the proposal.
EXISTENTIAL RISK Do we want to give every human the capability to have intelligence at home, in order to engineer potentially lethal pathogens, as demonstrated in the Nature article from last year? No. Do we want to accelerate research on a paradigm that creates unreliable, unpredictable, gaslighting machines, literal black boxes? No. Do we want to accelerate research on creating more and more powerful systems while we have no idea how to steer a superintelligent AI? No.
Source: Basic Argumentation
Oppose, unethical AI, grifters, scam
Since Laion datasets are created by scraping private, intellectual and copyrighted property, their continued development should be stopped and regulated. Furthermore, advocating for open-source AI created out of copyrighted material amounts to the theft that many artists have already complained about, and hopefully they will also start lawsuits against this as well.
Source: CAA lawsuit, Getty images lawsuit, Shutterstock lawsuit, Glaze project, Hive project.
#illegal, #opensource, #copyrighted, #Egair, #Lawsuit
Oppose, unethical AI, stealing data from artists, illegal misuse of fair use. Since Laion datasets are created by scraping the web, without any consideration of intellectual and © property, their continued development should be stopped and regulated. Advocacy for open-source AI created out of © material is illegal behaviour, an act of stealing the work of so many living artists, which is why I hope that we, artists, will have the possibility to start a lawsuit against Laion as well.
Source: CAA lawsuit and Stable Diffusion litigation lawsuit
Wouldn't this be used by rogue states?
Collecting child abuse material; keeping exploitative images of, and links to, children; private children's photos; abuse of children; allowing creation of realistic imagery of child abuse
Laion collects and keeps the following data: child abuse material, child pornography, private medical images that were uploaded and transferred between doctors and private persons, abuse of women, and non-consensual use of private information. All of this was found in every dataset so far, and was as such reported to multiple legal teams.
The petition assumes that the current, extremely expensive, large language models are the most promising direction in AI research. While these models have many interesting properties, their lack of truthfulness makes them unfit for most applications. We do need a computing facility of the kind proposed, but this petition over-sells one specific AI approach. We need greater diversity of approaches.
Source: Thomas Dietterich, Oregon, USA
How can we trust the requests of a company whose data-use permissions were limited by law to research purposes only, and which, under a clear conflict of interest, granted that data to companies with commercial interests? Those who were the first to use data that did not belong to them cannot be trusted to hand it to companies with commercial interests.
Latest signatures
- Andris Bite from Rauna parish (Latvia), 4 hours ago
- Anna Medyukhina from San Mateo (United States), 21 hours ago
- Lars Bengtsson from Broendby (Denmark), 23 hours ago
- Aaryan Aanthu from Mandvi (India), 1 day ago
- Vcfhb vfyhh from New York (Algeria), 2 days ago
Where did supporters come from?
- laion.ai 19%
- reddit.com 9%
- out.reddit.com 5%
- news.ycombinator.com 2%
- Web search 1%
- linkedin.com 1%
- heise.de 1%
- Unknown 62%