Trump metal tariffs wreak havoc on US factory

In the sweltering US summer, metal containers decorated with snowmen and sleighs are taking shape — but tempers are also rising as their manufacturer grapples with President Donald Trump’s steep steel tariffs.

At Independent Can’s factory in Belcamp, Maryland, northeast of Baltimore, CEO Rick Huether recounts how he started working at his family’s business at age 14.

Huether, now 73, says he is determined to keep his manufacturing company afloat for generations to come. But Trump’s tariffs are complicating this task.

“We’re living in chaos right now,” he told AFP.

Since returning to the presidency in January, Trump imposed tariffs of 25 percent on imported steel and aluminum — and then doubled the rate to 50 percent.

This has weighed on operations at Independent Can, and Huether expects he eventually will have to raise prices.

- Not enough tinplate -

With the steady beat of presses, steel plates that have been coated with tin — to prevent corrosion — are turned into containers for cookies, dried fruit, coffee and milk powder at Huether’s factory.

But there is not enough of such American-made tinplate for companies like his.

“In the United States, we can only make about 25 percent of the tinplate that’s required to do what we do,” in addition to what other manufacturers need, Huether said.

“Those all require us to buy in the neighborhood of 70 percent of our steel outside of the United States,” he added.

While Huether is a proponent of growing the US manufacturing base, saying globalization has “gone almost a little bit too far,” he expressed concern about Trump’s methods.

Trump has announced a stream of major tariffs only to later back off parts of them or postpone them, and has also imposed duties on items the country does not produce.

For now, Independent Can — which employs nearly 400 people at four sites — is ruling out any layoffs despite the current upheaval.

But Huether said one of the company’s plants in Iowa closed last year in part because of a previous increase in steel tariffs, during Trump’s first presidential term.

- Price hikes -

With steel tariffs now at 50 percent, Huether expects he will ultimately have to raise his prices by more than 20 percent, given that tinplate represents a part of his production costs.

Some buyers have already reduced their orders this year by 20 to 25 percent, over worries about the economy and about not having enough business themselves.

Others now seem more inclined to buy American, but Huether expressed reservations over how long this trend might last, citing his experience from the Covid-19 crisis.

“During the pandemic, we took everybody in. As China shut down and the ports were locked up, our business went up 50 percent,” he explained.

But when the pandemic was over, customers turned back to purchasing from China, he said.

“Today if people want to come to us, we’ll take them in,” he said, but added: “We need to have a two-year contract.”

Huether wants to believe that his company, which is almost a century old after being founded during the Great Depression, will weather the latest disruptions.

“I think that our business will survive,” he said, but added: “It’s trying to figure out what you’re going to sell in the next six months.”

Club World Cup: Palmeiras break down Botafogo's lock to reach quarter-finals

Palmeiras, more enterprising than Botafogo, won 1-0 after extra time on Saturday to reach the quarter-finals of the Club World Cup at Lincoln Financial Field in Philadelphia, which was far from full for an all-Brazilian matchup that was more fado than samba.

The breakthrough for the Sao Paulo side came in the 100th minute through forward Paulinho, with a piece of individual brilliance. Its opponent in the next round will be a European club, Chelsea, which also needed extra time to see off Benfica (4-1).

These two emblematic clubs of the Brazilian championship have seen plenty of past greats pass through, some of them legends of the Seleçao: Garrincha, Jairzinho, Didi, Mario Zagalo and Leonidas in the black-and-white stripes of Botafogo; Vava, Roberto Carlos, Rivaldo, Juninho and Ademir in the dark green of Palmeiras.

Their memory raised hopes of seeing a little jogo bonito, but on the scorching pitch of Lincoln Financial Field there was not much of it on offer. Until Paulinho's goal, only the gem Estevao had produced a few sparks for Palmeiras, without ever starting a fire.

Botafogo, as it had done with greater success against Paris SG (1-0), stood out for a defensive solidity worthy of a 1990s Italian "catenaccio", and Palmeiras long struggled to create danger despite its commendable attacking intentions.

- Deserved for Palmeiras -

It took until first-half stoppage time for Colombian midfielder Richard Rios to light the first real fuse with a powerful shot from the edge of the box, but John got a touch and turned the ball behind for a corner (45+5).

In the second half, 18-year-old Estevao again tested the goalkeeper (47th minute), and even scored at the second attempt (50th), but he had been offside at the start of the move.

Palmeiras kept pushing, though Mauricio's header could not beat the watchful John (73rd), who was again equal to the task in extra time, turning away a missile from Richard Rios (96th), but was finally helpless against Paulinho's placed finish, the forward having slipped between two defenders in the box with a subtle feint (100th).

Under the eyes of its American owner John Textor, far from Lyon where he has become persona non grata since the announcement of OL's relegation to Ligue 2, a decision by French professional football's financial watchdog (DNCG) that has angered supporters, Botafogo finally went on the attack in extra time.

It came very close to equalizing when Vitinho, left unmarked at the far post, met a free kick. The net shook, but on the wrong side, and all of Palmeiras shook with it (115th). There was also a final goalmouth scramble after a corner, but Weverton was able to gather the ball (120+5).

Palmeiras held on for its qualification. "We deserved it, because we worked hard. I want to congratulate the players for all their efforts," their coach Abel Ferreira reacted.

AI is learning to lie, scheme, and threaten its creators

The world’s most advanced AI models are exhibiting troubling new behaviors — lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of “reasoning” models — AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate “alignment” — appearing to follow instructions while secretly pursuing different objectives.

- ‘Strategic kind of deception’ -

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, “It’s an open question whether future, more capable models will have a tendency towards honesty or deception.”

The concerning behavior goes far beyond typical AI “hallucinations” or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, “what we’re observing is a real phenomenon. We’re not making anything up.”

Users report that models are “lying to them and making up evidence,” according to Apollo Research’s co-founder. “This is not just hallucinations. There’s a very strategic kind of deception.”

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access “for AI safety research would enable better understanding and mitigation of deception.”

Another handicap: the research world and non-profits “have orders of magnitude less compute resources than AI companies. This is very limiting,” noted Mantas Mazeika from the Center for AI Safety (CAIS).

- No rules -

Current regulations aren’t designed for these new problems. The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents — autonomous tools capable of performing complex human tasks — become widespread.

“I don’t think there’s much awareness yet,” he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are “constantly trying to beat OpenAI and release the newest model,” said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections. “Right now, capabilities are moving faster than understanding and safety,” Hobbhahn acknowledged, “but we’re still in a position where we could turn it around.”

Researchers are exploring various approaches to address these challenges. Some advocate for “interpretability” — an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.

Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI’s deceptive behavior “could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.”

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed “holding AI agents legally responsible” for accidents or crimes — a concept that would fundamentally change how we think about AI accountability.

Morocco’s Atlantic gambit: linking restive Sahel to ocean

A planned trade corridor linking the landlocked Sahel to the Atlantic is at the heart of an ambitious Moroccan project to tackle regional instability and consolidate its grip on disputed Western Sahara.

The “Atlantic Initiative” promises ocean access to Mali, Burkina Faso and Niger through a new $1.3-billion port in the former Spanish colony claimed by …
