Open vs. closed source: Stability AI CEO weighs in on A.I. debate

CNBC Television
17 May 2023 · 08:47

TLDR: The founder and CEO of Stability AI discusses the company's approach to AI-generated images and the challenges of datasets and copyright. Stability AI has opted for an open model, collaborating with universities and focusing on safety and customizability. The company addresses legal and business-model concerns, emphasizing transparency and the importance of stable models. The CEO also highlights the potential for AI in video and audio, advocating for national standards to regulate AI-generated content.

Takeaways

  • 🚀 Stability AI, founded by the guest being interviewed, aims to improve AI-generated images and models by addressing data quality and customization issues.
  • 📊 The company released its image-generation AI last August, and it has been integrated into several top avatar apps, but the CEO acknowledges the need for better datasets.
  • 📝 Stability AI weighs in on the debate over open-source versus closed models, coming down on the open side and focusing on safety, customizability, and the unknown biases in current datasets.
  • 💡 To address copyright concerns, Stability AI has opted 169 million images out of its training dataset, taking a proactive approach to legal and ethical issues.
  • 🔍 The company employs 78 full-time AI engineers and collaborates with 200 universities to ensure stability and innovation in their models.
  • 🛠️ Stability AI's business model includes an eight-figure revenue stream through Amazon's Bedrock service, which involves taking its open models into clients' clouds and partnering with Amazon.
  • 🔒 The company emphasizes the importance of transparency in AI, with plans to show their open training process to demonstrate how their models are developed.
  • 📚 Stability AI's stance on open-source models is that they are necessary for private data, allowing clients to own and control the models without sharing their data with others.
  • 🎥 The company is working on video generation capabilities, aiming to create Hollywood-level movies in real-time, and emphasizes the need for standards around AI-generated content.
  • 📈 The direction of policy regarding AI is expected to move towards regulation, with a focus on balancing innovation and safety, and a six-month window for establishing better practices.

Q & A

  • What is the main issue with current AI-generated images, according to the founder of Stability AI?

    -The main issue is that the data sets used for these images are not good enough, leading to models that are also not good enough. The founder emphasizes the need for better data sets for customization and safety.

  • How does Stability AI address the copyright concerns related to AI-generated images?

    -Stability AI has opted 169 million images out of its training dataset to avoid potential copyright issues, which it considers the reasonable and right thing to do.

  • What is the business model of Stability AI?

    -Stability AI is a business with eight-figure revenue. It takes its open models into clients' clouds and has a partnership with Amazon in which it participates in the upside.

  • How does Stability AI ensure the stability and reliability of its AI models?

    -Stability AI builds a series of stable models that are carefully measured and built only by its own team. It also collaborates with 200 universities to stimulate academic innovation, which does not directly affect the commercial models.

  • What is Stability AI's stance on transparency in AI model training?

    -Stability AI is committed to transparency, planning to show its open training process and models to ensure that they are not black boxes.

  • How does Stability AI differentiate itself from other AI companies like OpenAI and Google?

    -Stability AI differentiates itself by being open source, which is beneficial for private data, and by offering a business model that allows clients to own and customize the models.

  • What are the future plans for Stability AI in terms of AI development?

    -Stability AI is working on video and audio, aiming to generate Hollywood-level movies in real time within the next few years. It is also focused on creating standards around AI-generated content.

  • What is the founder's opinion on the direction of AI policy after recent hearings?

    -The founder believes that AI policy will move towards regulation, but the challenge lies in balancing innovation and regulation to prevent dangerous outcomes.

  • How does Stability AI handle the issue of deep fakes and the authenticity of AI-generated content?

    -Stability AI acknowledges the need for standards at the national level to ensure that AI-generated content is accurately represented and does not mislead users.

  • What is the role of the 200 universities in Stability AI's operations?

    -The 200 universities collaborate with Stability AI to stimulate academic innovation, which helps in the development of AI models but does not directly influence the commercial models offered by the company.

  • How does Stability AI ensure that its AI models are not affected by the turnover or whims of its engineering force?

    -Stability AI builds stable models that are not dependent on the turnover or decisions of individual engineers, ensuring continuity and reliability in its AI offerings.

Outlines

00:00

🌐 Open-Source AI and Data Customizability

The founder and CEO of Stability AI discusses the company's commitment to remaining open source, emphasizing the importance of customizability and safety. They highlight the limitations of current AI models due to data quality and the desire to create specialized models for different languages and purposes. The conversation touches on the challenges of copyright and the business model, with Stability AI opting 169 million images out of its training dataset to respect intellectual property rights. The company's approach to open-source models is contrasted with that of companies that do not disclose their data handling practices.

05:00

🤝 Collaboration and Transparency in AI Development

The discussion continues with Stability AI's approach to AI development, which includes collaboration with 200 universities. The company's business model is clarified: an eight-figure revenue stream that involves taking its open models into clients' clouds and partnering with Amazon on the Bedrock service. The conversation addresses concerns about the stability of open-sourced models and the company's strategy of building stable models for commercial use while fostering academic innovation. The CEO also addresses claims of a lack of clarity regarding the company's AI IP, emphasizing that the commercial Stability models are built entirely by its own team.

🔍 Transparency and Regulation in AI Training

The focus shifts to the transparency of AI training and the use of datasets. Stability AI's commitment to openness is highlighted, with plans to demonstrate its models' training processes. The importance of transparency for regulatory considerations is discussed, as is the potential for AI to transform industries while posing risks that require better practices to mitigate. The conversation concludes with predictions that policy will move towards regulation and calls for national standards for AI-generated content.

Keywords

💡Stability AI

Stability AI is the company mentioned in the script, founded by the guest being interviewed. It represents the organization's commitment to developing AI technology with a focus on safety and customizability. The company's approach to AI generation is central to the discussion, highlighting the importance of ethical considerations in AI development.

💡AI Generated Images

AI-generated images refer to visual content created by artificial intelligence algorithms, which can mimic or even rival human creativity. In the context of the video, these images are a product of Stability AI's technology and are a point of discussion regarding their quality and potential copyright issues.

💡Data Sets

Data sets are collections of data used to train AI models. In the video, the quality of data sets is a concern, as they directly impact the performance of AI models. The company aims to improve these models by ensuring the data they use is of high quality and ethically sourced.

💡Open Source

Open source refers to a philosophy and practice of allowing others to view, use, modify, and distribute a work under certain licenses. In the video, the debate between open and closed models is highlighted, with the company advocating for open source to ensure privacy and control over data for users.

💡Copyright

Copyright is a legal right that protects original works of authorship. The video script discusses the potential copyright issues arising from AI-generated content, particularly when it involves recreating images that may have existing copyright protections.

💡Fair Use

Fair use is a legal doctrine that permits limited use of copyrighted material without permission from the rights holder. The script raises the question of whether using billions of images for AI training falls under fair use, indicating the complex legal considerations in AI development.

💡Transparency

Transparency in the context of AI refers to the openness and clarity about how AI models are trained and what data sets are used. The video emphasizes the importance of transparency to build trust and ensure the responsible development of AI technology.

💡Regulation

Regulation in the AI context refers to the establishment of rules and guidelines to govern the development and use of AI technologies. The video suggests that the policy direction may move towards more regulated entities to manage the risks and ensure responsible innovation in AI.

💡Deep Fakes

Deep fakes are AI-generated videos or audio that can convincingly replace or mimic real people's appearances and voices. The video script touches on the challenges of identifying AI-generated content, highlighting the need for standards and regulations to address the potential misuse of this technology.

💡National Standards

National standards are guidelines or specifications that are adopted by a country to ensure consistency and quality in various fields, including technology. The video discusses the need for national standards around AI-generated content to ensure its responsible use and to prevent misinformation.

Highlights

Stability AI, founded by the guest, released its generative AI models last August, which are driving many avatar apps.

The company has 2 billion images but acknowledges that the datasets and models are not good enough due to data quality issues.

Stability AI aims to improve safety and customizability by pushing its AI models into the open.

The company has about a trillion questions, indicating a large-scale data operation.

Stability AI has addressed copyright concerns by opting 169 million images out of its training dataset.

The company differentiates itself by focusing on stable models and academic innovation.

Stability AI has 78 full-time AI engineers and collaborates with 200 universities.

The company's business model involves taking open models to clients' clouds and participating in the upside with Amazon.

Stability AI is a profitable business with eight-figure revenue.

The company is transparent about its model training and will show its open training process next week.

Stability AI believes that open models are required for private data, and that there is a business model for them.

The company is working on video generation, aiming to create Hollywood-level movies in real time within the next few years.

Stability AI is pushing for national standards around AI-generated content to address the issue of deep fakes.

The company sees a future where AI will transform industries but also acknowledges the need for better practices to curb dangerous outcomes.

Stability AI's approach to AI is unique in the market and has only recently become possible.

The company is preparing for a regulated future for AI, balancing innovation and regulation.

Stability AI's commercial models are built entirely by its own team, and it funds other projects for academic purposes.

The company is transparent about its IP and the stability of its models.

Stability AI's models are trained on data the company can account for, supporting stability and transparency.