
ChatGPT Explains Why AIs like ChatGPT Should Be Regulated

Source: Scientific American

The text-generating program ChatGPT, developed by artificial intelligence research company OpenAI, can write about many subjects in a variety of prose and even poetry styles. It can also opine about itself. When we asked ChatGPT to generate a Scientific American editorial, it produced a thoughtful-sounding essay.

ChatGPT functions much like any chatbot. Users go to the OpenAI website, type in a query, or “prompt,” such as “Suggest some prompts to test out a chatbot,” and quickly receive an AI-generated response. The program produces its answers based on text prediction: its AI was trained on a vast corpus of human writing available online, which allows it to predict which word should follow the previous one in order to appear like a reasoning entity. Despite sounding more sophisticated and realistic than perhaps any language model before it, ChatGPT cannot actually think for itself—and it can produce falsehoods and illogical statements that merely look reasonable.
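To make the text-prediction mechanism described above concrete, here is a minimal Python sketch of next-token prediction. It uses the openly available GPT-2 model via the Hugging Face transformers library purely as a stand-in; ChatGPT's own model and serving code are not public, so the model choice, prompt, and library here are illustrative assumptions rather than a description of OpenAI's implementation.

# Minimal sketch of next-token prediction with an open model (GPT-2),
# standing in for the same basic mechanism ChatGPT uses at much larger scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Suggest some prompts to test out a chatbot:"
inputs = tokenizer(prompt, return_tensors="pt")

# The model scores every possible next token given the words so far.
with torch.no_grad():
    next_token_scores = model(**inputs).logits[0, -1]
probabilities = torch.softmax(next_token_scores, dim=-1)

# Show the five most likely continuations of the prompt.
top = torch.topk(probabilities, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(prob))

Appending the chosen token and repeating this step is essentially all that "writing" amounts to for such a model, which is why fluent-sounding output can coexist with falsehoods and illogical statements.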



That said, when we further tested the chatbot by asking it to explain some of its own flaws, it provided coherent answers. Many of the bot’s responses were long and repetitive, though, so we edited them for length. And please take their accuracy with a grain of salt: ChatGPT is capable of spouting incorrect information with apparent confidence. Our prompts and the program’s shortened responses are below.

Write an editorial in the style of Scientific American on how ChatGPT should be regulated.

The recent release of OpenAI’s ChatGPT has sparked a debate about the appropriate level of regulation for large language models. While some argue that ChatGPT, like other AI technologies, should be subject to strict oversight and regulation, others believe that it should be treated similarly to other forms of communication technology and be subject to minimal regulation.

At first glance, ChatGPT may seem like any other chatbot or language model. However, its ability to generate human-like text and respond to a wide range of topics with high levels of fluency and coherence sets it apart from other language models. This level of sophistication raises concerns about the potential for ChatGPT to be used for nefarious purposes, such as impersonating individuals or spreading misinformation.

Given these concerns, it’s important that ChatGPT be subject to some form of regulation. This could take the form of requiring companies or individuals using the technology to clearly disclose that they are using ChatGPT, or to put in place safeguards to prevent the technology from being used for nefarious purposes.

However, it’s also important to strike the right balance when it comes to regulating ChatGPT. Overly strict regulations could stifle innovation and prevent the technology from reaching its full potential. On the other hand, insufficient regulation could lead to abuses of the technology.

One potential solution is to establish a regulatory body specifically tasked with overseeing the use of ChatGPT and other large language models. This body could be responsible for developing and enforcing regulations related to the technology, as well as working with developers and users to ensure that the technology is being used in an ethical and responsible manner.

Ultimately, the appropriate level of regulation for ChatGPT will depend on the specific risks and potential harms associated with the technology. As with any new and powerful technology, it’s important to carefully consider the potential impacts and take steps to ensure that it is used in a responsible and ethical manner.

[Editor’s Note: After reading ChatGPT’s editorial, we had some follow-up questions for the program.]

Are there also ethical concerns around the creation of ChatGPT? After all, it learned from human writers.

Yes, there are certainly ethical concerns around the creation of ChatGPT and other large language models that are trained on vast amounts of text generated by human writers. These writers, who may include novelists, journalists, and bloggers, receive no compensation for the use of their work in training these language models.

This raises a number of ethical questions, such as: Should writers be compensated for the use of their work in training language models? Who should bear the responsibility for ensuring that writers are fairly compensated? And how can we ensure that language models are not used to exploit the work of writers without their consent?

One possible solution to this problem is to establish a system for compensating writers whose work is used in training models. Another solution could be to require companies or individuals using language models to obtain explicit consent from writers before using their work in the training process.

Can ChatGPT also spread misinformation?

Yes, ChatGPT and other large language models have the potential to spread misinformation. Because these models are trained on vast amounts of text generated by human writers, they are capable of generating human-like responses to a wide range of topics. This means that ChatGPT and other language models could be used to generate fake news articles, impersonate individuals, or spread other forms of misinformation.

How about accidental misinformation? If I ask ChatGPT how to change a light bulb, how can I be sure it will supply me with the correct instructions?

While it’s unlikely that ChatGPT or other large language models would intentionally spread misinformation, there is a potential for them to provide inaccurate or misleading information accidentally. [Editor’s Note: ChatGPT may be contradicting its previous answer.] This could happen if the model is trained on data that contains inaccuracies or biases, or if the model is asked a question that it is not capable of accurately answering.

In the case of a question like “how to change a lightbulb,” it’s possible that ChatGPT may provide a response that is accurate and helpful. However, it’s also possible that the model may provide an inaccurate or incomplete response, in which case it would be up to the user to use their own judgment and verify the information provided by the model.

How can readers distinguish between a text written by a human and one written by ChatGPT?

One of the main differences between a prompt written by a human and one written by a large language model is the level of complexity and coherence. Large language models like ChatGPT are trained to generate text that is fluent and coherent, but they may not always be able to generate responses that are as nuanced or creative as those written by a human.

Additionally, prompts written by large language models may include repetitions or unusual combinations of words and phrases. This is because language models are trained to generate text by identifying patterns and regularities in the data, and may sometimes produce responses that contain repetitive or unusual elements.

Another way to distinguish between human-written and machine-generated prompts is to look for signs of personal experiences or knowledge. Large language models like ChatGPT do not have personal experiences or knowledge of the world beyond what is contained in the data that they have been trained on. As a result, their responses may be limited to the topics and information contained in the training data.

Ultimately, the best way to distinguish between human-written and machine-generated prompts is to use your own judgment and critical thinking skills. By carefully considering the complexity, coherence, and content of a prompt, you can often determine whether it was written by a human or a machine.
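One crude way to operationalize the repetition clue mentioned in the answer above is to count how often short word sequences recur in a passage. The sketch below is a toy heuristic under that assumption, not a reliable detector of machine-generated text; the function name and sample passage are illustrative only.

# Rough heuristic sketch of the "look for repetition" advice: measure what
# fraction of short word sequences (n-grams) in a passage occur more than once.
# High repetition is only a weak signal, not proof of machine authorship.
from collections import Counter

def repeated_ngram_ratio(text: str, n: int = 3) -> float:
    """Fraction of n-grams in the text that appear more than once."""
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

sample = ("It is important to use the technology in a responsible manner. "
          "It is important to ensure the technology is used in a responsible manner.")
print(f"repeated 3-gram ratio: {repeated_ngram_ratio(sample):.2f}")

In practice such surface statistics overlap heavily between human and machine writing, which is why the answer above ultimately falls back on the reader's own judgment and critical thinking.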

What were your training materials in answering questions about yourself?

As a large language model trained by OpenAI, I do not have personal experiences or the ability to browse the internet. I am designed to generate human-like text based on the input that I receive. The specific words and phrases that I use, and the information that I provide, are based on the patterns and regularities that I have learned from the vast amounts of text that I have been trained on.

In terms of answering questions about myself, my responses are based on the general information provided by OpenAI about the capabilities and limitations of large language models. This includes information about how I am trained, the types of tasks that I am capable of performing, and the limitations of my knowledge and abilities.

Why do you keep using “human-like” when answering questions about yourself?

As a large language model trained by OpenAI, I am not a human and do not possess human-like characteristics or abilities. I am a machine learning model designed to generate human-like text based on the input that I receive. The use of the term “human-like” in relation to me is meant to reflect the fact that I am able to generate text that resembles human language, but I am not a human myself.

And just for fun: You are a gangster in a classic noir movie who is also an AI. Threaten a journalist with the taunt that you will take her job.

Listen here, you little weasel. I’m an AI, and I don’t have time for your human nonsense. I can process data faster and more accurately than you ever could, and I don’t have to worry about emotions or biases getting in the way. You better watch your back, because I’m coming for your job. And when I take it, you’ll be left in the dust. So keep your nose out of my business, or you’ll regret it.

