Questions OpenAI's Sam Altman Was Asked at Senate Hearing

Here are some of the questions Sam Altman answered during the Senate hearing.

Should we consider independent testing labs that provide scorecards for models?

Yes, I think companies should publish the testing results of their models and also allow independent audits.

What would be AI's impact on jobs?

Like with every technological revolution, I think there will be a significant impact on jobs, but what that impact looks like is very difficult to predict.

What's your biggest nightmare?

My worst fears are that we cause significant harm to the world. If this technology goes wrong, it can go quite wrong.

Should we be worried about AI models that can predict election results and influence voters?

It is a significant area of concern. I do think some regulation and public education are required for this.

You said earlier that Section 230 doesn't apply to generative AI. Can you elaborate on that?

I don't know the exact answer to that. I do think that for a technology this new, we need a new framework.

If there are no global regulations for AI, it will be difficult for companies to fine-tune their models to regional regulations.

The US should take the lead in setting up international standards in collaboration with other partners. We have done it before; we can do it again.

Can artists generate a song with an AI model and own the rights to that song?

Creatives should have control over their creations. We are still figuring out regulations around that.

Can you promise not to use artists' data to train AI models without their consent?

Content creators need to benefit from this technology. We are working with different artists to find out what their opinions are.

What are you planning on doing with election misinformation?

There is a lot we can do there. There are things the model refuses to generate. We also have monitoring, so at scale we can detect someone generating a large volume of those tweets.

How do you decide a model is safe enough to go into the world?

Before we put something out, we of course need to test it thoroughly against various parameters. We spent six months with GPT-4 before releasing it.

Should AI models be given morals and values?

Yes. I think giving the models values upfront is extremely important. We need to let the world decide what those values will be.

What would your efforts be to create safe AI?

1. Form a new agency that licenses any effort above a certain scale of capabilities.
2. Create a set of safety standards.
3. Require independent audits.

You make a lot of money, do you?

I'm paid enough for health insurance. I have no equity in OpenAI.

Are you going to run ads to give your investors a return on investment?

I wouldn't say never. I really like a subscription-based model for that.

Are you worried about market concentration in AI?

There are both benefits and negatives to that. You have to keep an eye on the absolute bleeding edge of capabilities.
