In the midst of the genocide against the Palestinian people, in 2024, the company Sidenor sold steel to an Israeli arms manufacturer. A police search has revealed that at least one of Sidenor's directors knew full well that the steel would be used for artillery shells.

ARGIA
CC BY-SA

It is undoubtedly a commonplace in anime to craft male characters who verge on feminine performance, attracting both women and men within the series itself. Examples abound. To name just two: closing out the 20th century, Japan gave us Shun of Andromeda and other characters from Saint Seiya (Los caballeros del zodíaco); today, it gives us Jinshi, from The Apothecary Diaries (Los diarios de la boticaria). The post ¿Nunca viste un hombre hermoso? Masculinidad y feminidad en el anime: Shun de Andrómeda y Jinshi was published first on La tinta.

La Tinta
CC BY-NC

At the local level, a concentration of public sector employees along ethnic lines is visible, depending on the ethnic structure of the municipality. Ethnic Macedonians make up 100 percent of employees in municipalities located mainly in the east and southeast, such as Krivogaštani,…

Вистиномер
CC BY-ND

Alabama Reflector

The Alabama House of Representatives Tuesday passed a bill that would create a new sales tax holiday. HB 360, sponsored by Rep. Chris Sells, R-Greenville, establishes a Second Amendment Sales Tax Holiday for the last weekend in August. “As crime increases, people want protection,” Sells said in the House chamber. “You probably have [owned] a […]

CC BY-NC-ND

Nashville Banner

Emily Lamb announced her resignation as chair of the East Bank Development Authority to take a position running the Metro Codes Department, and was succeeded by Mona Hodge, while Trisha Herzfeld was appointed to fill the vacancy on the board. The post Mayor Taps East Bank Board Chair Lamb to Lead Metro Codes appeared first on Nashville Banner.

CC BY-NC-ND

Washington State Standard

An impending tax in Washington state targeting private jets could be grounded just days before its scheduled takeoff. The Democratic-controlled Washington Legislature approved the new luxury tax last spring. State Sen. Marko Liias, D-Edmonds, who sponsored the tax, is now in the unusual position of trying to repeal his own prior policy. A separate repeal […]

CC BY-NC-ND

Indiana Capital Chronicle

Sports journalism is important to both news consumers and to newsrooms. Rooting for a team or an athlete allows us to experience stories of winning and losing that we can share with our neighbors. Sports is a window to the broader world and sometimes to ourselves and our own communities.   Even though the Winter Olympics, […]

CC BY-NC-ND

Indiana Capital Chronicle

A narrowly divided vote to roll back portions of Indiana’s environmental code — plus a high-profile bid to lure the Chicago Bears across the state line — anchored a deadline-day push Tuesday as the Indiana House advanced a slate of bills and set up end-of-session negotiations across the rotunda. Lawmakers also narrowly approved a controversial […]

CC BY-NC-ND

The findings are based on analysis of more than 700,000 biomass-change estimates from nearly 34,000 fish populations between 1993 and 2021. For fisheries management to be effective, plans must be international and must take long-term biomass loss into account.

SINC
CC BY

Special coverage of the U.S. president's State of the Union address to the U.S. Congress.

صدای آمریکا
Public Domain

Healthbeat

Public health, explained: Sign up to receive Dr. Jay K. Varma's reports in your inbox a day early.

Hello and welcome to Healthbeat's weekly report on stories shaping public health in the United States. I am Dr. Jay K. Varma, a physician, epidemiologist, and public health expert currently serving as chief medical officer at Fedcap, a national nonprofit focused on economic mobility and well-being for vulnerable communities. Views expressed here are my own. This week, I'm focusing on a question that sounds like science fiction but is increasingly being asked in serious policy circles: Will artificial intelligence kill us all?

AI can help health agencies avert illness and death

Let's start with my bias. I believe that AI is a transformative tool that can help public health agencies avert illness and death in their communities if used responsibly and ethically. It can help epidemiologists process, analyze, and interpret surveillance data. It can assist health officials in tailoring communications, particularly in emergencies, and in improving programs for immunizations, sexually transmitted infections, tuberculosis, and maternal and child health. As AI systems grow more capable, however, many experts are wondering whether the risks could outweigh the benefits. Some of those risks are distant but catastrophic. Others are happening right now at a smaller scale.

An AI safety leader warns the 'world is in peril'

Earlier this month, a senior safeguards researcher at the AI company Anthropic resigned, warning that the "world is in peril." In his resignation letter, he cited concerns about AI, bioweapons, and interconnected global crises. Anthropic has positioned itself as a safety-oriented firm in the race to build increasingly powerful generative AI systems. The New Yorker just published an in-depth investigation into Anthropic and the inherent tensions people in the company feel trying to build advanced AI systems while safeguarding against risks to humanity. Curiously, the U.S. Defense Department just took the view that Anthropic may be a risk to the U.S. government, because it's too focused on ethics and protecting human health. (Full disclosure: I'm a big fan of Claude Code, one of Anthropic's premier products. Read an excellent review of that product by widely respected AI expert Ethan Mollick here.)

The departing Anthropic researcher had led work on reducing risks from AI-assisted bioterrorism and on understanding how AI assistants could distort human judgment. His concerns are shared by others in the field. As AI models become more proficient at synthesizing scientific literature and solving technical problems, they may lower the barrier to designing or modifying pathogens. Biology is already digitized: genome sequences are publicly available, and laboratory protocols are published in journals and databases. Recent research has tested advanced AI models on graduate-level virology problems, demonstrating that they can perform at or near expert level on tasks involving viral genetics and pathogenesis. While such capabilities can help humanity by improving vaccine design or antiviral discovery, they could also be misused to enhance the infectiousness or immune evasion of pathogens. Having led outbreak responses in Asia, Africa, and the United States, I still believe that we need to worry most about nature, the "spillover" of viruses from animals into humans, when trying to prevent the next pandemic. Nevertheless, the pace of AI development is extraordinary, which means my risk assessment is changing daily as well. An intentionally engineered pathogen designed for maximum spread could have far greater consequences than even the Covid pandemic.

Proposals to reduce biosecurity risks

Researchers and policymakers have outlined approaches to mitigate risk. One proposal is to secure closed-source biological AI models, restricting access and subjecting users to rigorous vetting and monitoring. Companies can conduct exercises in which they deliberately probe their own systems for vulnerabilities before malicious actors do. A second approach is to protect high-risk biological datasets from being used to fine-tune open-source models. If detailed genomic data on high-consequence pathogens or advanced laboratory protocols are easily accessible for model training, they can amplify the capabilities of otherwise general-purpose systems. Tighter access controls and clearer publication norms could reduce misuse. A third area is to restrict AI agents that interface with biological tools. As AI systems move beyond generating text to interacting with laboratory equipment or ordering synthetic DNA, safeguards need to be in place to ensure that a human screens DNA synthesis orders and audits AI-designed workflows. In the AI era, biosecurity will require multiple layers of defense that protect humanity at the level of algorithms, data, and laboratory infrastructure.

The near-term mental health risks

While headlines often focus on existential threats, I am most concerned at the moment about a more immediate hazard: AI's impact on mental health and suicide. Millions of people now interact daily with AI "companion" chatbots designed to simulate friendship, empathy, or therapeutic dialogue. Some users, including minors, disclose suicidal thoughts to these systems. Multiple lawsuits allege that AI chatbot interactions contributed to users' deaths by suicide. In response, California enacted Senate Bill 243 in 2025, the first law in the nation regulating AI companion chatbots. The law requires clear disclosure when a user is interacting with AI rather than a human, mandates protocols for responding to suicidal ideation, and imposes additional rules for known minors, including periodic reminders that the chatbot is not human and restrictions on sexually explicit content. It also creates a private right of action, allowing civil lawsuits for violations.

California's law is an important milestone. It acknowledges that emotionally realistic AI systems can foster attachment and dependence, particularly among adolescents. It also recognizes that AI systems need regulation to ensure they do not severely harm mental health or promote suicidal thinking. We do not know, however, whether these regulations will even work. Disclosure that a system is "not human" may not counteract emotional realism. Protocol requirements do not guarantee effectiveness. Operators may avoid collecting information about users' ages, limiting their obligations to minors. From a public health perspective, these debates remind me of the challenges with social media platforms. Almost 16 years after Instagram was released, we may now see the company held legally liable for its impact on mental health. Last week, Mark Zuckerberg, CEO of Meta, had to testify in court about whether Instagram (which Meta owns) was designed to addict and harm teenagers. Social media technologies optimized for engagement have amplified loneliness, misinformation, and depression, and AI systems could do the same at an even larger scale.

Evaluating the uncertainty

So back to the original question: Will AI kill us all? I believe the probability of an extinction-level event is low today, but low is not zero, and the consequences would be catastrophic. At the same time, the mental health risks of emotionally persuasive AI systems have already appeared and need to be addressed. Public health agencies must always prepare for rare catastrophic events while addressing everyday harms. With AI, we must now do both simultaneously.

Until next week,
Jay

Dr. Jay K. Varma, who is recognized globally for his leadership in the prevention and control of infectious disease, writes about public health for Healthbeat. He has guided epidemic responses, developed policies, and implemented programs that have saved lives across Asia, Africa, and the United States. He is based in New York. Contact Jay at jvarma@healthbeat.org.

CC BY-NC-ND

What was originally Jefferson-Moore High School will soon be razed for downtown development. The city wants to hear your stories. The post A riverside school known by many names will be gone – but not forgotten. appeared first on The Waco Bridge.

The Waco Bridge
Attribution+

Oklahoma's student outcomes have fallen to near the bottom nationally, ranking 48th on the Nation's Report Card after dropping from the top half in the 1990s. Researchers point to states like Mississippi, which climbed from last to middle, as models for the literacy and math reforms Oklahoma is now considering. The post From Top Half to Near Last: How Oklahoma’s Schools Lost Three Decades of Ground and What Can Be Learned from Mississippi appeared first on Oklahoma Watch.

OklahomaWatch.org
Attribution+

Facing a similar labor shortage, Canada is recruiting health care workers from the United States. Could this pull workers from Kansas City? The post For disillusioned health care workers in Kansas City, Canada beckons appeared first on The Beacon.

The Beacon
CC BY-ND

Mississippi Today

State, Ole Miss and Southern Miss are off to amazing starts in the young college baseball season. So much to discuss, including the Golden Eagles' impressive sweep through the prestigious Round Rock Classic.

CC BY-NC-ND

Verite

Kid Thomas Valentine, or simply “Kid Thomas,” was known as “the last of the rough house trumpet players.”

CC BY-NC-ND

The newsrooms will deliver more bite-sized answers to yes/no questions in the lead-up to the 2026 election. The post Wisconsin Watch partners with Milwaukee Journal Sentinel to produce more Fact Briefs is from Wisconsin Watch, a nonprofit investigative news site covering Wisconsin since 2009. Please consider making a contribution to support our journalism.

Wisconsin Watch
CC BY-ND

The second deputy prime minister of the government and minister of Labor has announced that she will not stand as a candidate in the general elections scheduled for 2027. The decision, conveyed in a public letter, marks the early close of her electoral career.

Mundiario
CC BY-SA

New Jersey Monitor

A new migrant detention center planned for Roxbury has become a major issue in the 7th Congressional District, where Rep. Tom Kean Jr. (R) is seeking reelection in November.

CC BY-NC-ND

Reports indicate that corruption is rife in the repair work currently under way at the 'Tejenkarbamid' plant, the first carbamide (urea) production facility built in Turkmenistan.

Azat Ýewropa we Azatlyk Radiosy
Attribution+