
🚨 Preventing Rogue Superintelligence

On Ilya Sutskever's new focus, Google injecting $2B into Anthropic, the end of GPT wrappers, and the new PyTorch conference.

AlphaSignal

Hey,

Welcome to this week's edition of AlphaSignal.

Whether you are a researcher, engineer, developer, or data scientist, our summaries ensure you're always up-to-date with the latest breakthroughs in AI.

Let's get into it!

Lior

In Today’s Email:

  • Releases and Announcements in the Industry

  • Jina AI Introduces The First Open-Source 8K Text Embedding Model

  • OpenAI forms new team to assess “catastrophic risks” of AI

  • Ilya Sutskever wants to Prevent Rogue Superintelligence

Read Time: 4 min 38 sec

RELEASES & ANNOUNCEMENTS

1. Google Commits $2 Billion in Funding to AI Startup Anthropic
Google has invested $500 million upfront in AI startup Anthropic, known for its chatbot Claude, with an agreement to invest a further $1.5 billion over time. This move is expected to strengthen Google’s position in the chatbot market.

2. The end of GPT wrappers? ChatGPT adds PDF Analysis and Automatic Tool Switching
ChatGPT Plus now allows users to upload PDFs for data analysis and removes the need for manual tool selection by automatically switching modes based on the user's needs.

3. Anthropic, Google, and Others Launch $10 Million AI Safety Initiative
Anthropic, Google, Microsoft, and OpenAI have invested over $10 million to create a new AI Safety Fund, aiming to advance research in the responsible and safe development of frontier AI models. This move signifies a major industry-wide effort to raise safety standards and proactively address the challenges posed by advanced AI systems.

4. All Talks from the PyTorch Conference are now available on YouTube
Catch up on the latest in PyTorch development with talks from experts. Topics include PyTorch 2.0, code linters, performance acceleration, edge computing, and efficient inference.

5. MIT, Cohere for AI, others launch platform to track and filter audited AI datasets
MIT, Cohere for AI, and 11 other institutions have introduced the Data Provenance Platform, aiming to address the transparency crisis in AI. As a part of this initiative, nearly 2,000 common fine-tuning datasets have been audited, with data tagged to original sources.

Magically Create Documentation With AI

Tired of explaining the same thing over and over again to your colleagues? It’s time to delegate that work to AI. guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI generated documentation.

  • Turn boring documentation into stunning visual guides

  • Save valuable time by creating video documentation 11x faster

  • Share or embed your guide anywhere for your team to see

Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover, and calls to action.

The best part? The extension is 100% free.

NEWS
Jina AI Introduces The First Open-Source 8K Text Embedding Model

What's New?
Jina AI has launched ‘jina-embeddings-v2’, the first and only open-source text embedding model that supports an extensive 8K token context length. It rivals OpenAI's ‘text-embedding-ada-002’ (Ada) model in performance across various tasks, including classification, reranking, retrieval, and summarization.

Why Does It Matter?
Jina-embeddings-v2's 8K context length significantly improves performance in scenarios where understanding the broader context is essential for accurate conclusions. Furthermore, its open-source nature ensures ongoing development and innovation in this domain.

Key Takeaways:

  1. Open-Source: Free to use and runnable locally, in contrast to OpenAI’s proprietary Ada model, promoting community-driven development.

  2. Competitive Performance: Delivers performance on par with the Ada model across various tasks.

  3. Extended Context: The 8K context length enables detailed text analysis, unlocking applications in healthcare, law, and finance (see the usage sketch below).
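
For readers who want to try a long-context embedding model for retrieval, here is a minimal sketch using the Hugging Face transformers library. The model identifier, the trust_remote_code flag, the 8192-token limit, and the mean-pooling step are assumptions based on the announcement, not details confirmed in this newsletter; consult the model card for the recommended setup.

```python
# Minimal sketch: long-context text embeddings with transformers.
# Model id and 8K max length are assumptions based on the announcement.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "jinaai/jina-embeddings-v2-base-en"  # assumed Hugging Face identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)  # model may ship custom code
model.eval()

def embed(texts, max_length=8192):
    """Mean-pool token embeddings into one unit-length vector per input text."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=max_length, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = model(**batch).last_hidden_state       # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()           # zero out padding tokens
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return torch.nn.functional.normalize(summed / counts, dim=-1)

# Usage: rank candidate passages against a query by cosine similarity.
query = embed(["What does the contract say about early termination?"])
passages = embed(["...long legal document text...", "...another passage..."])
print(query @ passages.T)  # higher score = more relevant passage
```

Because the embeddings are L2-normalized, the dot product above is equivalent to cosine similarity, which is the usual scoring choice for retrieval and reranking tasks like those mentioned in the benchmark comparison.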

NEWS
OpenAI forms new team to assess “catastrophic risks” of AI

What's New?
OpenAI has established the Preparedness team and rolled out a challenge, aiming to enhance the safety measures for advanced AI systems. This initiative is part of a broader commitment to navigate the complexities and potential risks associated with increasingly capable AI technologies.

Why Does It Matter?
As deep learning models grow in capability, ensuring their safety becomes crucial. OpenAI’s Preparedness team will assess capabilities, evaluate risks, and implement mitigation strategies across potentially catastrophic risks spanning individualized persuasion, cybersecurity, and chemical threats.

Main Takeaways

  • Preparedness Team: Dedicated to evaluating capabilities and mitigating risks for cutting-edge AI models, aiming to understand and safeguard against possible catastrophic risks.

  • Risk-Informed Development Policy: Developing guidelines to ensure rigorous evaluation, monitoring, and governance of AI models, enhancing safety and alignment.

  • Preparedness Challenge: An initiative encouraging the identification of potential risk areas, with $25,000 in API credits for the top 10 submissions.

NEWS
OpenAI’s Co-Founder, Ilya Sutskever, Wants to Prevent Rogue Superintelligence

What's New?
In a new interview, Ilya Sutskever, OpenAI’s co-founder and chief scientist, announced he has shifted his focus to preventing AGI from becoming a threat.

He and fellow scientist Jan Leike are launching a team dedicated to "superalignment," which aims to create a set of fail-safe procedures for future AGI technologies. OpenAI plans to dedicate a fifth of its computing resources to solving this problem within four years.

He believes ChatGPT just might be conscious (if you squint) and the world needs to wake up to the true power of the technology his company and others are racing to create. Plus, he envisions a future where humans opt to integrate with machines.

“It’s going to be monumental, earth-shattering. There will be a before and an after.”

Why Does It Matter?
The pivot to focus on AGI safety comes at a time when the technology is debated both for its potential benefits and risks. Sutskever believes that AGI can revolutionize healthcare, combat climate change, and more. However, he also warns that as AI systems become more capable, they will be more challenging for humans to assess.

The concept of "superalignment" aims to make sure that AGI does what we intend and nothing more, addressing these concerns.

Main Takeaways

  • Change in Priorities: Sutskever is dedicating his efforts to mitigate risks associated with superintelligent AI.

  • Significant Impact: He foresees a massive change in the world due to AI, stressing the importance of preparedness.

  • Possibility of Conscious AI: He does not dismiss the idea that large language models like ChatGPT operate like Boltzmann brains, adding complexity to discussions about AI’s development and ethical considerations.


Thank You

Igor Tica is a writer at AlphaSignal and a Research Engineer at SmartCat, specializing in computer vision. He is passionate about contributing to the field and seeks research collaborations spanning self-supervised and contrastive learning.

Jacob Marks is an Editor at AlphaSignal and an ML Engineer at Voxel51. He is a Top Writer in AI on Medium and a LinkedIn Top Voice in AI. Prior to joining Voxel51, Jacob worked at Google X, Samsung Research, and Wolfram Research. In 2022, he completed his Ph.D. in Theoretical Physics at Stanford.

Want to promote your company, product, job, or event to 150,000+ AI researchers and engineers? You can reach out here.