
OpenAI, facing questions on Tumbler Ridge, told ministers it had referred another case to police


OTTAWA — Canada’s public safety minister says that OpenAI confirmed in a meeting with ministers that there has been one case in Canada where it referred a user’s activities to police, as the company faces scrutiny for not doing so in the case of the Tumbler Ridge shooter.

Gary Anandasangaree told National Post in an interview that the information was provided by company representatives, though he said he did not have the specifics of the case.

“They advised that there’s one case that was referred in Canada,” he said on Wednesday, adding later that “we didn’t get into the details.”

“There’s been … one reported incident in Canada, where they have escalated to law enforcement,” the minister said.

OpenAI has not yet responded to questions about the case. A source with knowledge of the discussions said the case happened this year and came up during a discussion of the U.S. company’s safety protocols.

Ministers were also informed that ChatGPT answers some 2.5 billion questions per day and that the number of cases that meet a threshold of concern “appears to be quite low,” Anandasangaree said.

Anandasangaree, like his federal colleagues who attended the meeting with the company’s safety representatives for more than an hour on Tuesday evening, echoed the government’s request that the company return with better safety measures.

The minister said that on the surface, the safeguards the company has in place “seem inadequate,” but do change with every version of ChatGPT that is released.

“What we’ve asked is for them to come up with a much more robust set of guidelines that will ensure situations such as this are targeted and are sent to the police.”

The meeting, arranged by Artificial Intelligence Minister Evan Solomon and attended by his federal colleagues representing justice and public safety, followed a report in the Wall Street Journal that a ChatGPT account linked to Tumbler Ridge shooter Jesse Van Rootselaar had been flagged last June for activities that violated the company’s policies. As a result, Van Rootselaar was banned from the platform.

While details of what Van Rootselaar had exchanged with the chatbot have not been divulged, the company confirmed in an email last week that it had considered alerting Canadian police to what its internal detection systems had flagged, but ultimately did not after determining it did not meet the internal threshold to warrant a warning.

The company confirmed that it had informed the RCMP about Van Rootselaar’s activities after the Feb. 10 shooting.

Mounties in B.C. say that Van Rootselaar, who died from a self-inflicted gunshot injury, entered Tumbler Ridge Secondary School on Feb. 10, killing six people, five of them children, and critically injuring another child. The shooter’s mother and half-brother were found dead in the family home later that day.

The tragedy is one of the worst mass shootings in Canadian history.

Speaking to reporters on Wednesday, Solomon said that “of course a failure” occurred when the company decided against alerting police about the flagged interaction between the 18-year-old Tumbler Ridge shooter and its chatbot.

Solomon said that he would not comment on details of the case, but said he left disappointed that OpenAI did not present any proposals to better enhance safety and expects the company to return with more solutions.

“Of course a failure occurred here … we want to ensure that this does not happen again,” he said on his way into the Liberals’ weekly caucus meeting on Wednesday.

“We were really disturbed by the reports that there might have been an opportunity to escalate this to law enforcement further, and we want to make sure if any company has that opportunity, they would escalate.”

Prime Minister Mark Carney said on Wednesday that he had not yet been briefed on the meeting with OpenAI.

“Obviously, anything that anyone could have done to prevent that tragedy, or future tragedies, must be done,” Carney said.

Solomon earlier in the week said he had requested the meeting with OpenAI to discuss the tech company’s safety policies and thresholds when it comes to escalating matters.

“We expected them when they came to not only give us details about their escalation thresholds and their safety protocols, but we expected them to come with some concrete solutions so Canadians can feel comfortable that this kind of tragedy may be avoided,” he said on Wednesday.

“We are disappointed that they did not provide any concrete proposals.”

OpenAI did not respond to a request for comment by publication time.

Justice Minister Sean Fraser said that “trust is going to be earned” by companies like OpenAI and that it depends on what changes the company adopts.

“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government’s going to be making changes.”

Solomon has said the government was open to looking at all options.

The incident with OpenAI comes as the Carney government considers how to introduce measures to better protect Canadians, and in particular children, when it comes to online safety.

That effort is expected to be handled by Canadian Heritage Minister Marc Miller, who also attended Tuesday’s meeting with OpenAI.

National Post

