US shoots down UFO over Canada
Canadian Prime Minister Justin Trudeau said the U.S. shot down an unidentified flying object over the Yukon, Canada at his request.



We leave Earth because it is deprived of resources, but the aliens actually find the resources on Earth valuable to them, so they colonize it. But we humans need to return and reclaim our homeland, right? How can we let aliens do anything they want?

submitted by /u/basafish
AI social progression

As AI and automation continue to take more jobs, and if we project that a few decades into the future, two views predict our societal future: one is a utopia where people are free to do what they please, making art and enjoying leisure, while robots do all the hard work. The other is a dystopia where the makers of the automation are mega rich while everyone out of work is mega poor, with no middle class. The latter view is kind of where we have been headed the last few decades (increased income inequality), and I wonder if anyone has talked about what needs to happen to steer society in the direction of the first view?

submitted by /u/senrnariz
The Best Treadmills of 2023

With more and more people building home gyms instead of joining fitness centers, treadmills are among the smartest purchases for exercise enthusiasts or anyone looking to get back into shape.

There are considerable benefits to putting a treadmill in your living room, office, or basement. Treadmills are easy to use, easy to maintain, and provide a sweat-inducing workout in a short amount of time. And, of course, there's the time-saving aspect of having a treadmill at home: no more trips back and forth to the gym. Here are some tips for shopping for one of the best treadmills, along with our recommendations.

— Best Overall: NordicTrack T Series 6.5
— Best for Home Office: UREVO 2-in-1 Under Desk Treadmill
— Best Smart: Echelon Stride
— Best for Walking: Sunny Health & Fitness SF-T1407M
— Best Budget: XTerra TR150 Folding Treadmill

How We Selected the Best Treadmills

A treadmill was my gateway to a life of running. My parents bought a treadmill for our basement when I was in high school, and I logged countless hours just walking on the machine. It was instrumental in keeping me in shape when I was home from college on winter breaks. I used a treadmill while training for my first race to teach myself what a specific pace felt like. In coming up with this list, I surveyed the market and considered user reviews. This list of the best treadmills for your home gym is based on three criteria.

Budget: Treadmills vary in price primarily due to the extra features the machine offers. This list has less expensive treadmill options for people on a budget and high-end machines that a person would find in a commercial gym.

Variety: Some of the models are incredibly basic, with few bells and whistles. One or two of the treadmills offer so much technology the machine might even be smart enough to do your taxes.

Exercise Objectives: A few treadmills on this list are explicitly designed for hardcore runners or anyone in training for a specific event. Some of the treadmills on the list were chosen specifically with new runners in mind. One of the treadmills on this list is specifically for walkers.

Best Overall: NordicTrack T Series 6.5

The NordicTrack T Series 6.5 is the best treadmill overall.

Why It Made the Cut: A fitness-center-caliber workout machine right in your own home. The impressive decline and incline capabilities of the NordicTrack T Series 6.5 will challenge every runner, while the FlexSelect cushioning of the running belt will soften the impact on the joints.

— Dimensions: 73 x 35.8 x 67.5 inches
— Weight: 203 pounds
— Maximum Speed: 10 mph

— 30-Day iFIT Membership included
— Space-saving design
— Folds for easy storage

— Full iFIT membership isn't included beyond the 30-day trial
— Extremely heavy to move

You know that treadmill at the gym that you love? Now you can have it in your house. The NordicTrack T 6.5 Treadmill offers OneTouch incline and speed control, so you don't have to mess around with buttons in the middle of your workout.

The machine has plenty of leg and elbow space, so you're not bumping into the sides and losing your balance as you run. And the FlexSelect deck cushioning protects your joints for those long, intense training runs.

NordicTrack's innovative space-saver design and EasyLift assist make folding up the unit and storing it out of the way a perfect option for people in smaller living spaces. Another excellent feature of the NordicTrack T Series 6.5 is the iFIT on-demand workout app. Users can stream thousands of live and on-demand workouts, all led by elite iFIT Trainers. You can choose between high-energy studio classes or global workouts that bring different parts of the world right into your living room or home gym. Each training session will automatically adjust your speed, decline, and incline to optimize the workout. Each treadmill purchase includes a free 30-day trial of iFIT.

Best for Home Office: UREVO 2-in-1 Under Desk Treadmill

The UREVO 2-in-1 Under Desk Treadmill is the best treadmill for a home office.

Why It Made the Cut: Perfect for multi-tasking, the UREVO 2-in-1 treadmill is great for people who have time to work out and for people who really don't.

— Dimensions: 58 x 29 x 6 inches
— Weight: 68.4 pounds
— Maximum Speed: 7.6 mph

— Two machines in one
— 16.9-inch wide running belt
— Comes fully assembled
— LED Display and remote control
— Wheels make for easy moving and storage
— Phone holder

— Basic display options
— Possible hazard while working

The UREVO treadmill is a multi-purpose machine designed for running or walking. Just leave the frame up for a great running workout or take it down to walk during a Zoom call or Netflix binge session. The 16.9-inch wide belt gives users more room to move around and more freedom during a run, while the non-slip surface will make running and walking a little easier on the body.

Sporting an LED Display with a remote control option, the machine tracks speed, distance, time, and calories while the remote makes changing speeds and stopping the machine simpler. The powerful 2.5HP motor is quietly efficient. The unit quickly switches from walking to running modes and can be easily transported.

Best Smart: Echelon Stride

The Echelon Stride is the best smart treadmill.

Why It Made the Cut: Functionality and technology meet intelligent design in the latest addition to the top-notch Echelon product family. The Echelon Stride is powerful—with a top speed of 12 mph—and portable, allowing users to move the unit around the home gym or living space.

— Dimensions: 69.3 x 31 x 49.2 inches
— Weight: 154 pounds
— Maximum Speed: 12 mph

— Meets strict international safety standards
— Eight preprogrammed workouts
— Equipped with USB charger
— Equipped with speakers
— Secure tablet holder

— High price point
— Monthly fee for app membership

The Echelon Stride puts safety and convenience first and meets some of the strictest safety standards in the world for exercise equipment, thanks in part to a metal safety bar underneath the running deck that protects objects from being pulled under. There are also side steps for getting on and off the machine.

Heart rate sensors are integrated into the handlebars. The machine has a max speed of 12 mph and a max incline of 10 percent. The Echelon Stride comes equipped with Bluetooth connectivity, a USB charging port for other devices, and a built-in steel handle for moving the unit around the house.

Best for Walking: Sunny Health & Fitness SF-T1407M

The Sunny Health & Fitness SF-T1407M is the best treadmill for walking.

Why It Made the Cut: The Sunny Health & Fitness SF-T1407M is an excellent option for people who might not be ready to spend an entire paycheck on a treadmill. This machine is compact, lightweight, and moves solely on leg power.

— Dimensions: 49 x 23 x 50 inches
— Weight: 46.7 pounds
— Maximum Speed: N/A (manual; user-powered)

— Inexpensive
— Compact and ergonomic
— Easy to move around the house
— Requires no power, so it can work anywhere in the house
— No assembly required

— Maximum 220-pound weight capacity
— Too bare bones

The Sunny Health & Fitness treadmill is the perfect beginner machine and a smart purchase for anyone looking to start a running habit or find another way to get their daily steps in.

A solid but portable treadmill created for small spaces, the Sunny SF-T1407M treadmill still offers a decent-sized running surface. The treadmill is durable and will withstand power walking or light jogging. That said, the weight limit is 220 pounds, so heavier runners should consider another machine.

Best Budget: XTerra TR150 Folding Treadmill

The XTerra TR150 folding treadmill is the best treadmill at a budget-friendly price.

Why It Made the Cut: The perfect machine for people who are unable to spend top dollar but still want a reliable treadmill that will provide a solid workout.

— Dimensions: 63.4 x 28.75 x 51.4 inches
— Weight: 108 pounds
— Maximum Speed: 10 mph

— Lower price point
— Larger running surface
— 12 preset programs
— Grip sensors to track training zones
— Quiet but powerful motor

— Maximum weight capacity of only 250 pounds
— Manual incline setting
— Not many extras

Not everyone has the space or budget to purchase a large, expensive treadmill to use at home. The TR150 folding treadmill from XTerra solves both these problems with a high-quality and compact machine at a lower price.

The folding deck design is quick and easy to use, and the simple-to-read LCD will track the essential parts of your workout like time, speed, distance, calories, and pulse. The hand pulse grip sensors will keep you updated, so staying in your training zone is a breeze.

The deck cushioning provides multiple cushioning points for maximum impact absorption while the steady and almost silent motor powers the treadmill to a top speed of 10 mph. The larger running surface provides ample room for taller users, but keep the 250-pound weight limit in mind.

Things To Consider Before Buying a Treadmill

Before you start rearranging the furniture to accommodate a new treadmill, here are some things to consider:

Goals: What are your reasons for buying a treadmill? Are you looking to get into shape or back into shape? Maybe you're training to run your first race or compete in a half or full marathon? Perhaps the thought of going back to the gym is enough to make you want to stay home for the winter. All of these reasons are valid for browsing this list of the best treadmills for your home.

Usage: If you're interested in getting back into shape without a gym membership, a home treadmill is a good idea. If you're going to spend a substantial amount of money on a treadmill, you should come up with a plan for how often you're going to hop on the machine. You're also going to want to find a spot for the treadmill that will remind you to use the machine (and make you feel guilty when you don't).

Space: The good news for people interested in buying a treadmill is that many models are now foldable and take up far less space than previous generations. That said, a treadmill still takes up a considerable amount of room just standing up straight in the corner. If you're lucky enough to have an extra room to convert into a home gym or don't mind staring at a treadmill during family movie night in the living room, a treadmill is a smart purchase.


Q: Is running on a treadmill better than running outside?

Running on a treadmill isn't any easier or harder than running outside, as both depend on the amount of effort a person puts into the run. Both activities have pros and cons. Treadmills allow a runner to set and stick to a pace, but they don't offer much of a challenge unless a runner changes the speed or incline. Outside running offers uphills, downhills, turns, and varied terrain, plus something different to look at every time. Of course, the home treadmill is always available for a run, while rain, wind, and snow outside can cancel workout plans. Both options have a place in a runner's regular routine.

Q: Are treadmills bad for my knees?

This all depends on the activity. If you're jogging or running, treadmills can put some stress on your knees, but so will an outdoor run up steep hills. Increasing the speed on a treadmill and running for long periods could cause a little more irritation on the knees and joints. If you're using a treadmill specifically for walking, a treadmill is no better or worse on the body than walking outside.

Q: Is the calorie count accurate on treadmills?

The calorie count on most treadmills is accurate as long as the machine asks for your weight before the workout. Most machines don't ask, though, and instead assume the average user weighs 155 pounds, so people under that weight actually burn fewer calories than the display shows. For those interested in getting an accurate reading of calories burned, heart rate, and other vital stats, we suggest buying a smartwatch for tracking purposes.

Final Thoughts

Treadmills are the ideal choice for a home exercise machine, and they're useful for people of all ages and fitness levels. The machine will long outlast your interest in going to the gym and eliminate excuses for not exercising during inclement weather.

No matter your fitness goal, buying a treadmill for your home is a smart purchase. Just do your homework before buying and make a promise to yourself that the machine will be put to good use and not become just another place to hang clothes.

This post was created by a non-news editorial team at Recurrent Media, Futurism's owner. Futurism may receive a portion of sales on products linked within this post.

The post The Best Treadmills of 2023 appeared first on Futurism.



Angiopoietin-like protein 4/8 complex-mediated plasmin generation leads to cleavage of the complex and restoration of LPL activity
Proceedings of the National Academy of Sciences, Volume 120, Issue 7, February 2023.
Self-renewing macrophages in dorsal root ganglia contribute to promote nerve regeneration
Proceedings of the National Academy of Sciences, Volume 120, Issue 7, February 2023.
Learning hydrodynamic equations for active matter from particle simulations and experiments
Proceedings of the National Academy of Sciences, Volume 120, Issue 7, February 2023.
Neural parameter calibration for large-scale multiagent models
Proceedings of the National Academy of Sciences, Volume 120, Issue 7, February 2023.
Sam Bankman-Fried Ponders Whether He Might Have Done Anything Wrong
Wondering if former FTX CEO Sam Bankman-Fried, who's currently on house arrest awaiting criminal and civil trial, did something wrong? He is, too.

We the Baddies

Wondering if ex-FTX CEO Sam Bankman-Fried, who's currently on house arrest at his parents' crib awaiting both criminal and civil trial for fraud, campaign finance law violations, and other very serious charges, did something wrong? He is, too.

"Just, everyone left," SBF told The Financial Times, detailing how, in his eyes at least, the last few days at the now-bankrupt crypto exchange FTX went down. "I couldn't do it alone. And, if I'm alone, then maybe I'm wrong."

"I am pretty impervious to pressure, but at some point," he continued, "I started to feel like maybe I'm the one who's wrong here."

Us too, Sam. Us too.

Adults in the Room

In the article, the FT lays out a seriously detailed account of the last few more-or-less-functioning days of the once high-flying crypto exchange, reconstructed by way of interviews with unnamed former FTX employees, in addition to written internal and external FTX correspondence. Unsurprisingly, the word "chaotic" seems to barely even cut it.

"It was this combination of a real, physical hurricane and a psychological hurricane," said one former employee. "It was the most crazy, hectic 24 hours of my life. I felt like my worldview was falling apart. FTX was not just a job for me and for other people. FTX was my life."

While most of the fallen FTX's former execs are said to be working with the US government in the case against Bankman-Fried, the 29-year-old Palo Alto native has maintained his innocence, continuing to insist that if he'd been left to helm the company, FTX investors would have already gotten at least some of their money back.

"It felt to me like everyone around me had lost their minds all at once. And everyone is behaving bizarrely poorly," Bankman-Fried told the FT, explaining that, as the walls started to close in, his friends and advisors all seemed to crumble under the pressure. "I did feel sort of like there were no adults left in the room, like everyone is a child now."

Indeed, from the outside, it certainly does seem like FTX was run by too many too-young kids. That said, though, whether he's criminally convicted or not, Bankman-Fried was the 20-something-in-chief, allegedly still managing billions of dollars in assets in QuickBooks and chatting with fellow execs in a groupchat called "Wirefraud." His employees may not have acted like perfect grown-ups, but neither, it seems, did he.

READ MORE: 'Sam? Are you there?!' The bizarre and brutal final hours of FTX [The Financial Times]

More on behaving like an adult: SBF and Caroline Ellison Allegedly Had a Secret Groupchat Called "Wirefraud"

The post Sam Bankman-Fried Ponders Whether He Might Have Done Anything Wrong appeared first on Futurism.

Strange Lines Appear in Saturn's Rings
NASA's Hubble Space Telescope has captured images of strange lines crossing Saturn's rings, heralding the start of the planet's "spokes season."

Acting Up

There's something strange going on with Saturn's beautiful rings.

The gas giant experiences four seasons, much like our own except seven Earth years in length, thanks to its tilted axis. During the planet's equinox, when its rings tilt edge-on to the Sun, mysterious and fleeting new features appear in its rings called "spokes."

As NASA explains, astronomers have started referring to this period as "spoke season," something that has been observed since the early 1980s thanks to NASA's Voyager mission.

Now, the agency's Hubble Space Telescope has captured images of these strange lines crossing Saturn's rings, heralding the start of the planet's most puzzling transitional period.

Spo(o)ke(y) Season

We still don't know why these mysterious spokes appear, let alone why they're seasonal in nature.

"Despite years of excellent observations by the Cassini mission, the precise beginning and duration of the spoke season is still unpredictable, rather like predicting the first storm during hurricane season," said NASA senior planetary scientist Amy Simon in a statement.

But scientists do have an educated guess: the spokes may be the result of changes in Saturn's magnetic fields caused by solar wind. This electrically charged phenomenon, which triggers northern lights back on Earth, may cause icy particles in the planet's smallest rings to float above the rest of the rings, causing them to appear as fleeting shadows in Hubble's observations.

"It's a fascinating magic trick of nature we only see on Saturn — for now at least," Simon said.

Astronomers are now combining data taken by Hubble with observations made by NASA's Cassini probe to get a better handle on the mysterious phenomenon.

READ MORE: Hubble Captures the Start of a New Spoke Season at Saturn [NASA]

More on Saturn: Something Weird Is Happening on Saturn's Snow-Covered Moon, Scientists Say

The post Strange Lines Appear in Saturn's Rings appeared first on Futurism.

Scientists Officially Link Sports Cars to Small Penis Size
Through a series of cruel experiments, British scientists have found psychological evidence linking flashy sports cars and small penis size in men. 

Through a series of cruel experiments, British scientists say they've found psychological evidence linking flashy sports cars to small penis size in men.

In a yet-to-be-peer-reviewed paper out of University College London, the researchers behind the study — evocatively titled "Small Penises and Fast Cars: Evidence for a Psychological Link" — detailed how they, in their own words, "manipulated what men believed about their own penis size" before asking them questions about luxury cars.

The experiment was deceptively simple: the researchers gave participants "false information" (read: lied to them) via an online surveying platform by telling them that the average erect penis size is roughly seven inches (when in reality, most research indicates that it's between five and six inches) and then, primed with that phony statistic, asked them a series of questions pertaining to consumer habits and desires.

"We found that males, and males over 30 in particular, rated sports cars as more desirable when they were made to feel that they had a small penis," the Machiavellian researchers wrote in the study.

According to the paper, there were other versions of this veritable mindfuck of an experiment in which the British researchers "manipulated [subjects'] self-esteem in different ways," but their professed link between the self-perception of having a smaller penis and desiring to own a fancy sports car didn't play out in those.

In other words, it seems a lot as though in an attempt to show that the trope of buying expensive sports cars to overcompensate for having a small penis is "grounded in a psychological truth," these seemingly sadistic researchers ended up playing into the same kind of tired body-negative stereotypes that have for millennia made men act out over a phony sense of shame and inadequacy.

It's no Stanford prison experiment, but perhaps next time psychological researchers will think twice before pandering to one of our culture's worst lies.

More on cruel experiments: Scientist Who Gene Edited Human Babies Says Mistakes Were Made

The post Scientists Officially Link Sports Cars to Small Penis Size appeared first on Futurism.

Well, I never: AI is very proficient at designing nerve agents | John Naughton

Researchers for a pharmaceutical company stumbled upon a nightmarish realisation, proving there's nothing intrinsically good about machine learning

Here's a story that evangelists for so-called AI (artificial intelligence) – or machine-learning (ML) – might prefer you didn't dwell upon. It comes from the pages of Nature Machine Intelligence, as sober a journal as you could wish to find in a scholarly library. It stars four research scientists – Fabio Urbina, Filippa Lentzos, Cédric Invernizzi and Sean Ekins – who work for a pharmaceutical company building machine-learning systems for finding "new therapeutic inhibitors" – substances that interfere with a chemical reaction, growth or other biological activity involved in human diseases.

The essence of pharmaceutical research is drug discovery. It boils down to a search for molecules that may have therapeutic uses and, because there are billions of potential possibilities, it makes searching for needles in haystacks look like child's play. Given that, the arrival of ML technology, enabling machines to search through billions of possibilities, was a dream come true and it is now embedded everywhere in the industry.

The History of the Lab Rat
In my reporting for Discover, I regularly see studies that rely on laboratory rats to answer a variety of questions. One study, for example, considered whether garlic had protective properties against toxins. Another studied rats' hunger and impulse control. And in a study that no one in my family wanted to hear about at dinner time, researchers measured brain activity in decapitated rats. For me, these stories prompted a new question: why do rats show up so much in research? Why are they running through mazes, pushing levers to receive treats or being placed into rat-sized guillotines? The history of the lab rat dates back centuries, and the sequencing of the rat genome in the early 2000s means rat research is providing more insight than ever.

Researching Rats

Long before researchers were getting young male rats drunk to measure the impact of substance abuse on developing brains, scientists were experimenting with rats and nutrition. Prior to 1850, European scientists used Rattus norvegicus (aka the Norway/street rat) in nutrition experiments. In 1856, one of the first published rat experiments described the effect of an adrenalectomy in albino rats. It was in the 1890s that the first known rat research came to the U.S. with neuroanatomical studies at the University of Chicago. More than a century later, scientists formed the Rat Genome Database in 1999, allowing them to share information and contribute to the rat genome sequencing. Because rat and human genomes have similarities, the project has enabled studies of genetically based diseases. Rats and mice are now used in 90 to 95 percent of research involving mammals. A 2021 study in Scientific Reports looked at 16 major research institutions in the U.S. and found that 99.3 percent of mammals used in laboratory experiments were mice and rats.

Rat research now informs studies related to Alzheimer's disease, arthritis, cancer, cardiovascular disease, diabetes, spinal cord injury and substance abuse (hence the aforementioned drunk teen rats).

Why Rats?

Rats are the most common rodent used in research, and there are a variety of reasons why. In terms of behavior, rats are desirable research subjects because they are social creatures. They are also curious and can be trained to perform repetitive tasks. They aren't fussy eaters and have a varied diet. Rats age quickly and reach sexual maturity after only a few months. Rats typically mate once they are 100 to 120 days old, and female rats reach menopause around 450 to 540 days old. The rodents' rapid aging is helpful to scientists studying the aging process in humans. However, scientists still disagree on how rat age correlates to human age.

But there are similarities between rats and humans that are more straightforward. Rats can be used to study almost every organ system in humans. There are also strong structural similarities between rat and human brains, which enables neuroscientists to use rats to learn more about human brain functions. In rat brains, for example, almost a third of the sensorimotor cortex is designated for processing information sensed by the rat's whiskers. The human cortex, by comparison, devotes nearly 40 percent of its area to processing visual information.

Relying on Rats

The need for laboratory rats has increased in the past 60 years and will likely continue to grow. In 1966, the Animal Welfare Act created stipulations for using large mammals such as monkeys or dogs in research. The law was a response to pet owners whose animals had been stolen and sold to laboratories. It regulated how cats, dogs, guinea pigs, hamsters, nonhuman primates and rabbits were sold, transported and handled. In time, the law evolved to include all warm-blooded animals, with the exception of mice, rats and birds.

Because the law specifically does not include rats, government officials do not inspect laboratories that use and house rats. Without government oversight, it's difficult to determine how many rats are in U.S. labs. One study estimated as many as 115 million rats are being used for research. A 2023 Congressional Report noted that animal advocates would like to see more oversight and transparency regarding the use of rats, mice and birds in scientific experiments. Until then, it should be noted that the above studies that involved getting rats drunk, poisoning them or decapitating them occurred outside the U.S., in laboratories in Iran, Italy, New Zealand and Pakistan.
Microsoft CEO Pretty Sure He Can Keep AI From Escaping Human Control
Microsoft CEO Satya Nadella took time to address concerns over AI safety in an interview with CBS News, and he seems confident about keeping AIs reined in.

Neural Prison

When it comes to the search engine department, Microsoft is finally giving Google a literal run for its money for the first time in well over a decade. Earlier this week, Microsoft revealed its newly reinvented and AI-augmented Bing search engine, which is powered by a souped-up version of OpenAI's ChatGPT. In a conversational format much like the chatbot it's built on, the new Bing can answer almost any question you throw at it with impressive results (though with varying degrees of accuracy).

While some employees at the company are sobbing tears of joy, Microsoft's CEO Satya Nadella took the opportunity to address concerns over safely developing its AI in an interview with CBS News.

First, he defended the decision to release the AI to the public, even if it's still full of kinks.

"The only way for any new technology to be really perfected is to be in the market with real human feedback," Nadella said. "If anything, in particular with AI, it has to get aligned with human preferences, both personally and societally, in terms of the norms."

"And yes, we will have many, many mechanisms in place to ensure that nothing biased, nothing harmful gets generated," he added.

Maintaining Control

At the interviewer's prompting, the Microsoft CEO acknowledged that the ominous possibility of an AI going rogue and turning against humanity is a valid concern.

"Runaway AI — if it happens — it's a real problem," Nadella admitted.

"But the way to sort of deal with that is to make sure it never runs away," he confidently averred. Who woulda thunk it!

Beyond stating the obvious, Nadella went on to outline the context in which we humans should use AI to avoid a collision course with a dystopian future.

"The first set of categories in which we should use these powerful models are where humans, unambiguously, unquestionably, are in charge," Nadella said. "And so as long as we sort of start there, characterize these models, make these models more safe and over time much more explainable, then we can think about other forms of usage."

More on AI search: Google's Demo of Upcoming AI Shows It Making Huge Factual Mistake

The post Microsoft CEO Pretty Sure He Can Keep AI From Escaping Human Control appeared first on Futurism.

Small Chunk Breaks Off the Sun, Does a Little Dance
The Sun apparently got a little goofy as scientists observed a small, strange chunk of it breaking off and doing a little jig. 

Dance Dance

The Sun apparently got a little goofy as scientists observed a small, strange chunk of it break off and do a little jig.

As Space.com reports, this rare "polar vortex" was captured by NASA's Solar Dynamics Observatory earlier in February — and beyond that observation, scientists are still trying to figure out exactly what was going on here.


According to Scott McIntosh, the deputy director of the National Center for Atmospheric Research in Boulder, Colorado, this fascinating filament is seemingly the first of its kind and may have to do with another bizarre observation: a "hedgerow in the solar plasma," as the solar physicist put it, that forms at exactly 55 degrees latitude and marches toward the Sun's poles once per the star's 11-year sunspot cycle.

"Once every solar cycle, it forms at the 55 degree latitude and it starts to march up to the solar poles," McIntosh told the space website. "It's very curious. There is a big 'why' question around it. Why does it only move toward the pole one time and then disappears and then comes back, magically, three or four years later in exactly the same region?"

While scientists have seen filaments break off of the Sun before, they've never seen one turn back in on itself and form a strange whirlwind quite like this one.


Just a few days after this funny little filament was spotted, two back-to-back solar flares that were powerful enough to knock out short-wave radios on Earth were also spotted on the Sun — but those were less noteworthy, if only because we're in the active period of the Sun's 11-year cycle, when these sorts of flares and fanfares occur with more regularity.


Given the timing, we're surely in for more solar treats in the coming years — and hopefully, they won't mess with our Terran communications too much.

More on the Sun: Amazing Video Shows Huge "Snake" Slithering Across the Sun's Surface

The post Small Chunk Breaks Off the Sun, Does a Little Dance appeared first on Futurism.

How to Watch Something Other Than the Super Bowl
Who cares if the "big game" is on? Here are our picks for the best stuff to stream on Sunday besides that pesky football championship.
This Week's Awesome Tech Stories From Around the Web (Through February 11)


The Original Startup Behind Stable Diffusion Has Launched a Generative AI for Video
Will Douglas Heaven | MIT Technology Review
"In a demo reel posted on its website, Runway shows how its software, called Gen-1, can turn clips of people on a street into claymation puppets, or books stacked on a table into a cityscape at night. Runway hopes that Gen-1 will do for video what Stable Diffusion did for images. 'We've seen a big explosion in image-generation models,' says Runway CEO and cofounder Cristóbal Valenzuela. 'I truly believe that 2023 is going to be the year of video.'i"


A Bold Plan to Beam Solar Energy Down From Space
Ramin Skibba | Wired
"Whether you're covering deserts, ugly parking lots, canals, or even sunny lakes with solar panels, clouds will occasionally get in the way—and every day the sun must set. No problem, says the European Space Agency: Just put the solar arrays in space. The agency recently announced a new exploratory program called Solaris, which aims to figure out if it is technologically and economically feasible to launch solar structures into orbit, use them to harness the sun's power, and transmit energy to the ground."


7 Problems Facing Bing, Bard, and the Future of AI Search
James Vincent | The Verge
"Satya Nadella, Microsoft's CEO, describes the changes as a new paradigm—a technological shift equal in impact to the introduction of graphical user interfaces or the smartphone. And with that shift comes the potential to redraw the landscape of modern tech—to dethrone Google and drive it from one of the most profitable territories in modern business. Even more, there's the chance to be the first to build what comes after the web. But each new era of tech comes with new problems, and this one is no different."


Electric Vehicles Could Match Gasoline Cars on Price This Year
Jack Ewing | The New York Times
"Increased competition, government incentives and falling prices for lithium and other battery materials are making electric vehicles noticeably more affordable. The tipping point when electric vehicles become as cheap as or cheaper than cars with internal combustion engines could arrive this year for some mass market models and is already the case for some luxury vehicles."


Researchers Discover a More Flexible Approach to Machine Learning
Steven Nadis | Quanta
"Apart from applications like autonomous driving and flight, liquid networks seem well suited to the analysis of electric power grids, financial transactions, weather and other phenomena that fluctuate over time. In addition, Hasani said, the latest version of liquid networks can be used "to perform brain activity simulations at a scale not realizable before.'i"


Rolls-Royce Nuclear Engine Could Power Quick Trips to the Moon and Mars
Kevin Hurler | Gizmodo
"The British aerospace engineering company says it's developing a micro-nuclear reactor that the company hopes could be a source of fuel for long trips to the Moon and Mars. …Since the nuclear reactor won't have to carry as much fuel as a chemical propulsion rocket, the entire system will be lighter allowing for faster travel or increased payloads."


The Generative AI Race Has a Dirty Secret
Chris Stokel-Walker | Wired
"The race to build high-performance, AI-powered search engines is likely to require a dramatic rise in computing power, and with it a massive increase in the amount of energy that tech companies require and the amount of carbon they emit. …Martin Bouchard, cofounder of Canadian data center company QScale, believes that, based on his reading of Microsoft and Google's plans for search, adding generative AI to the process will require 'at least four or five times more computing per search' at a minimum."


We Were Promised Smaller Nuclear Reactors. Where Are They?
Casey Crownhart | MIT Technology Review
"For over a decade, we've heard that small reactors could be a big part of nuclear power's future. "Because of their size, small modular reactors (SMRs) could solve some of the major challenges of traditional nuclear power, making plants quicker and cheaper to build and safer to operate. "That future may have just gotten a little closer."


How Our Reality May Be a Sum of All Possible Realities
Charlie Wood | Quanta
"The most powerful formula in physics starts with a slender S, the symbol for a sort of sum known as an integral. Further along comes a second S, representing a quantity known as action. Together, these twin S's form the essence of an equation that is arguably the most effective diviner of the future yet devised. The oracular formula is known as the Feynman path integral. As far as physicists can tell, it precisely predicts the behavior of any quantum system—an electron, a light ray or even a black hole."

Image Credit: Miti / Unsplash

America Has Gone Too Far in Legalizing Vice

"The cause of a gambling problem is the individual's inability to control the gambling." So says the National Council on Problem Gambling, an organization funded by the gambling industry to help people who have become addicted to its products. This attitude—that anyone who falls into gambling addiction has only themselves to blame—has allowed state lawmakers to ignore arguments that more access to gambling might make it easier for people to lose control. Since the Supreme Court struck down previous restrictions on sports betting in 2018, 36 states have legalized it (26 of which allow mobile betting), and new ballot initiatives are proposed every year. If you've watched a sporting event lately, you've been bombarded with ads for online sports gambling—and this weekend's Super Bowl will be no exception.

Similarly, when marijuana legalization is debated, supporters emphasize how the responsible use of marijuana might alleviate the pain of those suffering from incurable diseases. They also point to the worst excesses of the War on Drugs, which disproportionately affect Black people, though these are fortunately becoming rarer. This argument has been successful: Only four states still prohibit all uses of marijuana. In 19 states, the recreational use of marijuana is now fully legal; all other states allow medicinal use of cannabis products.

When arguments are made for loosening the government's restrictions on vice, usually proponents make their case with idealistic situations: Shouldn't responsible, independent adults be able to make decisions for themselves about how they spend their money or use their body? This seems appealing, and there certainly are well-informed adults who gamble and use marijuana judiciously. But focusing on these ideal cases and basing our laws on them disregards millions of people who suffer because of their addictions—and it obscures the underhanded tactics of companies who make money off the misery of addicts.

[Stephen Marche: America's gambling addiction is metastasizing]

These debates expose a conflict over what we believe about virtue and vice. If we think that human beings—especially young people who are forming the habits that will last a lifetime—tend to make decisions based on what they have reasoned to be their best interests, then legalization makes sense. If life is a series of contracts we enter into freely, then there's no reason to keep potential harms off our smartphone or out of storefront dispensaries. However, this way of seeing the world overlooks the fact that our hearts and minds are shaped not only by reason but also by our experiences, affections, and, most important, our habits, which are just as often inexplicably self-destructive as they are reasonable.

A rise in access to legal gambling will inevitably lead to a rise in gambling addicts. Natasha Dow Schüll's book, Addiction by Design, carefully documents how electronic slot machines are designed to get players addicted. One game designer says: "Once you've hooked 'em in, you want to keep pulling money out of them until you have it all; the barb is in and you're yanking the hook." Sports-betting companies have enticed colleges and universities to allow them to promote their products on campus, then offered free bets to lure customers in.

State laws tend to allow the gambling industry to regulate itself, which means that these companies are expected to identify and exclude their steadiest customers. This has been as unsuccessful as one might expect; as much as 50 percent of revenue comes from "problem gamblers," while one study showed that in 1998, only 4 percent of gambling revenue from video lottery games came from "responsible" gamers. Just as tobacco companies would go out of business if people used their products responsibly, gambling wouldn't be a multibillion-dollar industry if it weren't for addicts.

Marijuana has a more complicated legacy, especially because it has real (but rather modest) benefits for medicinal use. However, careful analyses show that marijuana legalization has contributed to a rise in opioid-related deaths, especially when dispensaries can legally sell all sorts of cannabis products. Permitting dispensaries also increases referrals for addiction treatment, which is unsurprising considering that higher-potency products are more dangerous. The best evidence we have suggests that marijuana is harmful to teenage brains as they develop and that more teenagers use marijuana when it is legalized in their state.

The industries that profit off addiction want to frame the question of access around "responsible use" and occasionally suggest that some people might have a genetic predisposition to addiction. This individualistic framing allows them to avoid talking about how much effort they're putting into making their products as accessible as possible. Even more important, it elides the question of whether we are all better off when it's easier to start an addiction and harder to escape one.

There's a richer and more compelling vision, one that is drawn from philosophical traditions across the ages. It recognizes that our life together isn't merely a series of contracts we negotiate, and that our ability to make good decisions isn't based simply on our rationality. Virtue is not simply doing good deeds, but also a set of dispositions and habits that must be practiced in order to flourish. Just as people can be sucked into addictions, we can also work to develop the virtues inside us so that we can be kind, generous, and self-controlled throughout our lives.

[From the December 2016 issue: How casinos enable gambling addicts]

Driven by this rich view of life together, we should make it as difficult as possible to access things that impair our ability to make good decisions. It's not the government's primary job to protect people from their own worst impulses, nor is the state the primary source of our virtue formation. But we do recognize that policy plays a role in shaping the environment so that we can develop our virtues. Just as highways have guardrails for the moments when a driver isn't exercising perfect self-control, so we also need guardrails to keep people from driving off cliffs of vice.

People often point to the historical example of Prohibition in America to prove that overregulation of vice carries its own dangers. While the classical tradition of virtue encourages moderation in all things (including moderate regulation and moderate prohibition), this tale is more complicated than the one that exists in the popular imagination. Domestic violence and alcohol-related illnesses were at record highs prior to the passage of the Eighteenth Amendment, and Prohibition was effective at reducing both. There's no evidence that organized crime increased in strength because of Prohibition, merely that it became more visible. In any case, a century later we can design our regulations around gambling and marijuana to protect the most vulnerable people—especially young people—while still allowing those who want to lose some money to do so with a little extra effort and permitting those who could benefit from marijuana to do so under the supervision of a physician.

Some judicious restrictions are better for everyone: Gambling should take place in casinos, not on smartphones, and marijuana should be used only under a health-care provider's supervision. We will need a lot more than a few regulations to help one another grow in virtue—but right now vice and its lobbyists have an unfair advantage that needs to be taken away.

Why We Lose Our Friends as We Age

This is an edition of The Wonder Reader, a newsletter in which our editors recommend a set of stories to spark your curiosity and fill you with delight. Sign up here to get it every Saturday morning.

When I was in college, an acquaintance who had graduated a few years prior came back to visit for the weekend. As we walked around campus on Saturday night, he flung his hands into the cold Connecticut air and exclaimed, "You guys are so lucky; you live a minute away from all your friends. You'll never have this again."

At the time, I thought it was kind of sad—a grown man pining for my life of university housing and late library nights. But his words have stuck with me in the years since. "In adulthood, as people grow up and go away, friendships are the relationships most likely to take a hit," my colleague Julie Beck wrote in 2015. The older you get, the more effort it takes to maintain connections, because you don't have as many built-in opportunities to see your friends every day.

The writer Jennifer Senior noted last year that the fact of our choosing friendships makes them both fragile and special: "You have to continually opt in. That you choose it is what gives it its value," she wrote. But that's also what makes friendships harder to hold on to as our lives evolve.

It's hard but not impossible. Senior notes that when it comes to friendship, "we are ritual-deficient, nearly devoid of rites that force us together." So we have to create them: weekly phone calls, friendship anniversaries, road trips, "whatever it takes."

"Friendship is the rare kind of relationship that remains forever available to us as we age," Senior writes. "It's a bulwark against stasis, a potential source of creativity and renewal in lives that otherwise narrow with time." It's something worth choosing, over and over again.

On Friendship

Oliver Munday

It's Your Friends Who Break Your Heart

By Jennifer Senior

The older we get, the more we need our friends—and the harder it is to keep them.

A woman sits in a chair with a laptop on her knees. Behind her is a collage of colorful silhouettes of friends.
Wenjia Tang

The Six Forces That Fuel Friendship

By Julie Beck

I've spent more than three years interviewing friends for "The Friendship Files." Here's what I've learned.

Two women sitting in chairs talking to each other in the midst of a wide open field at what looks like a concert venue
Millennium Images / Gallery Stock

Why Making Friends in Midlife Is So Hard

By Katharine Smyth

I thought I was done dating. But after moving across the country, I had to start again—this time, in search of platonic love.

Still Curious?

Other Diversions


In one of my favorite editions of Julie's Friendship Files, she spoke with three women who tried an interesting experiment to deal with "the friendship desert of modern adulthood": They entered into "arranged friendships," bringing together a group of strangers who committed to be friends through it all.

— Isabel


Would you consider a donation to support Weekend Reads, and our daily work?

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up to 290. There are more than 38,000 retractions in our database — which powers retraction alerts in EndNote, LibKey, Papers, and Zotero. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers?

Here's what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that's not in our database, you can let us know here. For comments or feedback, email us at



Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36102-1

The current use of elastography ultrasound faces challenges, including vulnerability to subjective manipulation, echo signal attenuation, unknown risks of elastic pressure and high imaging hardware cost. Here, the authors show a virtual elastography approach to empower low-end ultrasound devices with state-of-the-art elastography function.
Distinct tissue niches direct lung immunopathology via CCL18 and CCL21 in severe COVID-19

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36333-2

Infection with SARS-CoV-2 has been linked with substantive inflammation and lung pathology. Here the authors spatially associate CCL18 and CCL21 in distinct tissue niches with lung pathology of severe COVID-19.
iRobot Roomba Combo j7+ Review: Beautiful Vacuum, but Directionless
Beautiful, innovative hardware can't compensate for a poor software experience.
How to Survive If You Were Part of the Tech Layoffs
Big Tech has gotten much smaller over the past few months. If you got a pink slip and you're planning your next move, we have tips to help.
Attachment Style Isn't Destiny

The panic set in at the same point every semester: Whenever Ximena Arriaga, a psychology professor at Purdue University, got to attachment theory in her course on close relationships, the classroom grew tense. When she described how people who are anxiously attached can sometimes be demanding and vigilant—and that can drive their partners away—certain students looked disturbed. "I could just see in their face: I'm so screwed," Arriaga told me. When she explained how avoidantly attached people might feel overwhelmed by emotional intimacy, other students seemed so uncomfortable that they physically shrank back. Some would approach her after class and ask: "Is there any hope for me?"

These students were likely misinterpreting attachment theory in a way that experts told me they see all the time. The theory posits that there are three main attachment styles: securely attached people are trusting, and believe that others are generally worthy of trust; anxiously attached people long for closeness, but are paranoid that others will hurt them and are thus preoccupied with validation; avoidantly attached people, driven by the same fear of abandonment, keep others at arm's length. (More recently, some researchers have argued there is a fourth style: "disorganized," a combination of anxious and avoidant.) The common misconception is that one's style is set in stone during childhood, determined by connections with early caregivers, and doomed to play out in every relationship thereafter.

The reality of the theory is more complex than that. Your attachment style is not so much a fixed category you fall into, like an astrology sign, but rather a tendency that can vary among different relationships and, in turn, is continuously shaped by those relationships. Perhaps most important, you can take steps to change it. So Arriaga could give her concerned students good news: Attachment style isn't destiny.

[Read: I gave myself three months to change my personality]

You can't really blame people for misunderstanding attachment theory, given how significantly it's evolved since its conception. In the 1950s, the psychologist John Bowlby proposed the term attachment to describe the bond between infants and their mothers (fathers weren't considered particularly relevant at the time). His big idea—that the quality of a mother's care would essentially predict her infant's future well-being—built on another famous line of research that started the same decade: Harry Harlow's monkey studies.

In a series of experiments, Harlow, a University of Wisconsin psychologist, separated baby rhesus monkeys from their mothers and placed them in cages. In one study, each monkey was alone with two "surrogate mothers": one made of wire, which dispensed milk, and the other made of terry cloth, which did not. The monkeys overwhelmingly preferred the milkless but softer cloth monkey, cuddling up to it and running to it when frightened. In another study, when the baby rhesuses were deprived of any mother at all—real or fake—they seemed to lose their ability to socialize. Some stopped eating and eventually died. The ethics were dubious, but the takeaway was considered monumental: Children depend on their mothers not just for nourishment but for comfort—for an emotional bond seemingly so crucial that it was almost magical. Bowlby called that bond "attachment," and he believed that it formed a blueprint for all subsequent relationships. The effects of a mother's nurturing—or the consequences of her failures—were forever.

But Harlow's later research complicated that idea. When he put baby monkeys together—still with no surrogate or real mother—they fared much better than when they were in total isolation. And even those who'd been completely isolated for the first six months of life "achieved essentially complete social recovery" when placed with other monkeys. Michael Lewis, who directs the Institute for the Study of Child Development at Rutgers University's medical school, told me that researchers have realized something similar about human attachment: a mother-infant bond, or lack thereof, doesn't solely determine the health of the child's future relationships. Children are influenced by not just their parents but a whole world of other connections: peers, siblings, grandparents, neighbors, teachers. And early experiences aren't the only ones that are important. Researchers have found little correlation between childhood and adult attachment styles.

[How much alone time do kids need?]

That doesn't mean that attachment theory is bunk. Adults really do tend toward an attachment style—but it's multiply determined, which means that if you had a difficult childhood, you're not doomed. And although early theorists conceived of distinct attachment-style groups, researchers have since found that people fall not into an attachment bucket, but rather along a spectrum. Most people aren't too far apart on it. William Chopik, a psychologist at Michigan State University, put it this way: "Maybe you're a little bit more avoidant than me, or you're more secure than your other friends. There is a sense in which we're differing by, like, decimal points."

Some researchers have started referring to attachment "orientation," rather than "style," seemingly to avoid implying that it's a static personality trait. Amir Levine, a neuroscientist, Columbia University psychiatrist, and co-author of Attached, told me you can think of an attachment orientation as a working model of the world: a set of beliefs that are constantly put to the test. Those beliefs stem largely from the interactions you've already had—but your subsequent interactions keep shaping your expectations, which means that your working model can keep evolving.

In fact, it's likely to. On average, people tend to grow toward security as they get older. That might be because we accumulate more evidence that the people in our lives aren't going anywhere. "When you're married to someone for 40 years," Chopik told me, "hopefully you stop freaking out about whether or not they're going to be there the next day." There's also a "natural mellowing out that happens with age"—people tend to get better at social interactions, and more comfortable in their own skin.

Attachment style doesn't just change over the arc of your life. It can also vary from moment to moment (people tend toward insecurity when they're stressed) and across different relationships. Marisa Franco, a University of Maryland psychologist and the author of Platonic: How the Science of Attachment Can Help You Make—And Keep—Friends, told me that it's not uncommon, for instance, to have a more secure attachment with a partner than with friends. Unlike a romantic relationship, which might follow a more predictable structure—meeting, moving in together, perhaps getting married—and typically involves a more formal commitment, friendships can be full of ambiguity, which can lead us to fall back on old working models. Within a relationship category, too, your attachment style can differ; you might have a secure relationship with one warm, reassuring friend, and a less secure one with someone distant and flaky.

[Read: The trait that 'super friends' have in common]

For that reason, several researchers told me, if you want to work toward security, you might need to change who you're spending time with. People on the anxious side might flourish with someone who's particularly reassuring and present; people on the avoidant side might need someone who can give them space while still being supportive.

But Arriaga offered a caveat: Her research has shown that although reassurance can help anxiously inclined people in the short term, relying on it isn't always good for them. They can also benefit from pursuing a sense of self-efficacy—working on feeling more inherently worthy, and less dependent on others to tell them they are. In one study, for instance, she found that new parents who felt competent in their novel role displayed lasting increases in security. Other studies suggest that pursuing and succeeding in goals can do the same.

Attachment orientation is complex; it's an ongoing interaction between the external world and your internal one, between your circumstances and your interpretation of them. Separating the two can be hard. For instance, when people struggle with anxious attachment, Franco pointed out, they're apt to notice signs of rejection while overlooking signs of acceptance. But knowing that your working model might not match reality, that it can change, and wanting to change it does make a difference. One of Chopik's studies found that just wanting to become more secure was associated with more actual change in that direction over a four-month period, compared with subjects who didn't express a desire for change.

This is what Arriaga wanted to impart to her students: You may not pull yourself up from being the least to the most secure person in the class. You certainly can't undo the experiences you've already had—the ones that might've led you to grasp too hard for connection or push it away. But you will have new experiences; you'll likely meet people you can count on, and hopefully you'll start to believe that you can count on yourself, too. So when they ask her if there's hope, her answer is: "Of course."

The Quiet Desperation of Tom Brady

A few years ago, I asked Tom Brady if he ever worried that too much of his life was consumed by the game of football. This was, in retrospect, kind of a duh question to put to someone who played, you know, the game of football for a living. Rather successfully, too, and for a long time.

Brady confirmed the question's premise that, yes, football meant pretty much everything to him and he could not imagine doing anything else with himself. "I'm not a musician, not an artist," he told me, among other noninterests and non-hobbies. "What am I gonna do, go scuba diving?"

I took the glibness of Brady's answer as a sign that he wasn't particularly worried about the total commitment to football that he had so proudly made, and that had been such a celebrated hallmark of his afterthought-to-legend story. But then, Brady was still an active football player at the time, with years left to run in an epic career that finally ended last week after 23 years.

I've been thinking about that discussion since Brady dropped his semi-surprising "I'm retiring, for good" video. In particular, I've been thinking about a slightly different follow-up question I posed to him on the same theme: whether he worried that a vacuum might await him on the other side of quitting.

"You need a purpose when you wake up every morning," Brady told me, his voice turning quite serious. "When I don't have the purpose of football, I know that's going to be a really hard thing for me." In other words, Brady knew how scary retirement would be—much scarier to him than any 300-pound pass rusher ever was.

I had gotten to know Brady and his family while researching a story for The New York Times Magazine in the lead-up to what would be his fourth Super Bowl championship, a 28–24 victory for the New England Patriots over the Seattle Seahawks, in 2015. The article focused on what was, even then, the miracle of Brady's longevity in a league where the average player's career lasted just over three years. In the gallows lexicon of pro football, NFL stands for "Not for Long."

Brady has, of course, been the longest-running exception to football's short-timer rule. He was 37 at the time of our conversation, elderly by NFL standards. The story's headline was "Tom Brady Cannot Stop." And he still had eight years to go before he finally did.

I was struck then by how determined Brady was not only to win games but also to blow up the actuarial tables governing how long a quarterback should be allowed to participate in them. How did he pull this off? Everyone focused on what he was willing to sacrifice—his family, his safety, Big Macs. But I always felt that his extreme commitment obscured a more distressing factor in his decision to keep playing: the desperation behind it.

[Scott Stossel: The bathos of Brady]

Brady's protective circle of friends and family have always worried about how he would cope without the structure, mission, and intensity of football. They are worried now. "I think he is going to have a huge void in his life," Brady's father, Tom Brady Sr., told me when I reached him by phone last week, a few days after his son's retirement announcement.

Tom Sr., a delightful man who refers to himself as "the Original Tom Brady" and "the Old GOAT," was sitting in his Bay Area office, a five-minute drive from the San Mateo, California, home where he and his wife, Galynn, raised their four children and still reside. Original Tom is 78, founded a small insurance agency 51 years ago, and says he has no plans to retire himself. "But then, I don't have to get hit all the time in this job, like Tommy did," he told me.

Being his own boss allowed Tom Sr. to travel with his wife to nearly every game of his son's career, including four years at the University of Michigan and the astonishing 10 Super Bowls that Junior played in. What were Brady's dad's plans for Sunday's Chiefs–Eagles collision in the Super Bowl?

"I guess I'll be watching," he said. "Indifferently."

But he added that his new remove as a fan would be nothing compared with what the newly retired and greatest-ever quarterback will face. "Nothing will ever replace the joy Tommy had playing in football games, hanging with teammates, and joshing around in the locker room," Brady Sr. said. "Somehow he's going to have to find a substitute for that, just like every other guy has."

Not every guy has managed, and many have suffered. The physical aftermath of football is well cataloged—the ravaged bodies and brains, the proportionally higher death rates. But the psychological, sociological, and even spiritual turmoil of post-football lives can be equally brutal. No shortage of people around the game have testified to this. "The longer you play, the more you get used to the lifestyle," Mark Murphy, a former Washington Redskins defensive back and the longtime president of the Green Bay Packers, told me for a book I wrote about the NFL in 2018. "You can lose touch with reality."

The "reality" of football, such as it is, can be extremely different from the "reality" off the field. "Football was an island of directness in a world of circumspection," wrote Frederick Exley in his classic 1968 "fictional memoir," A Fan's Notes. "There was nothing rhetorical or vague about it."

[Mark Leibovich: The dark pageant of the NFL]

Brady has said as much in a million different ways, and always made clear which version of "reality" he preferred. "Sports is very real-time," he said in a podcast interview after he won his seventh and last Super Bowl, in early 2021. "What you see on that field from me is really me; it's not an actor. This is my life. These are my real emotions. This is real joy. This is real anger. This is real disappointment. And those things are a really vulnerable place to be."

Yet in some ways, I've never seen Brady so vulnerable as he has been in the years since that last championship, as he struggled with the wind-down of his playing days, his aborted retirement last February, his unretirement 40 days later, and the various other disorders served up by that other messy "reality" outside football. By then, his career plans had become their own annual cliff-hanger. When will Tommy finally quit? Retirement decisions are hard enough in private without everyone tossing out takes about whether you're too old, acting selfishly, or need to leave. Joey in the White House can probably relate.

"You know, I'm 45 years old, man. There's a lot of shit going on," Brady said at a press conference this past August after an unexplained 11-day hiatus from training camp—not long before he would announce the end of his 13-year marriage to the model Gisele Bündchen. "So, you've just got to try to figure out life the best you can."

What will Brady do with his life now? He says he will devote more time to his family, particularly his kids. In May of last year, Fox Sports announced a deal, reported at $375 million over 10 years, for Brady to call NFL games as soon as he finished playing. Brady's announcement last week even ignited speculation—a mini-cliff-hanger!—that he would make his debut this weekend on Fox's Super Bowl broadcast. But he spiked that idea when he told FS1's Colin Cowherd that he would not begin his work for Fox until the fall of 2024. That would give Brady plenty of time to settle into a post-football routine, take up scuba diving, or maybe pursue a job opening as quarterback with his boyhood team, the San Francisco 49ers. (Kidding about the last one—sort of.)

At the very least, Brady now seems to have figured out how to retire smoothly, relative to last year's stutter step of leaks, denials, and eventual reversal. His homemade 53-second video, filmed on a beach (apparently) in Florida, was praised as gracious and heartfelt. Several commentators observed that he seemed "at peace" with his decision, as if a burden had been lifted.

Maybe it has, but any aura of peace was lost on me. What struck me more was the waterless and overcast tableau of the video, the rows of high-rises in the background—Tom Brady all alone in a world of gray.

The Case for a Four-Day Workweek in Maryland

The Maryland State Capitol building is older than America. It is the only state capitol to have also served as the nation's capital; in the country's earliest days, Congress met in its chambers. To work in Annapolis is to operate in the shadow of history. So maybe that explains why, 246 years into the American project, one state lawmaker sees his four-day-workweek bill as carrying on in the tradition of the ideals of the Declaration of Independence. That, or it's just a good hook.

"The Framers put in 'Life, Liberty, and the pursuit of Happiness,'" Vaughn Stewart, one of the bill's co-sponsors, told me, emphasizing that last one. "This is really a larger conversation about where we are as a country, and whether we need to ask ourselves, for the first time in almost a century, if there is something better than living to work."

The very buzzy—but actually kind of modest—bill would create what is effectively a five-year experiment with a four-day workweek, offering $750,000 in tax credits for Maryland businesses per year over five years in exchange for shortening their hours and handing over data to the state on how it goes. "It's going to be really hard for me to persuade my colleagues that the time is now for this idea if the only data we have come from Scotland," Stewart explained to me. "That's just not going to be as persuasive as if it comes from Scotland, Maryland." (Yes, that's a real place.)

[Read: Kill the 5-day workweek]

Despite the practical approach, Stewart is a hard-core believer in the four-day workweek as the future. When I got on the phone with him last week, we spoke about the bill, work's place in American life, and how surviving cancer shifted his perspective.

Our conversation has been condensed and edited for clarity.

Caroline Mimbs Nyce: Give me the elevator pitch for this bill. Why should it get passed?

Vaughn Stewart: There has been an explosion of studies in the past year or two about the idea of companies reducing work hours. And the results of those studies have been, in my view, stunning. The employees loved working fewer hours. But what was really surprising is that the companies themselves reported greater productivity and ultimately greater profits. At the same time, companies outside of the context of an experiment are also choosing to make this shift.

So the elevator pitch is: Given the obvious success of reducing work hours that we're seeing in businesses across the country and across the world, it only stands to reason that Maryland should try this out.

Nyce: Why Maryland?

Stewart: Well, Maryland is where I live, so I can't put it in state legislation anywhere else. (Laughs.)

Nyce: Fair, but do you think Maryland is a particularly good candidate for this?

Stewart: Yeah, I think so. One of Maryland's nicknames is actually "America in Miniature." This idea has been studied in the United States to some extent but has been studied more heavily in Europe. The point of the bill is to get data that's more local and more relevant to us. You can make the case that Maryland—because it has so many different slices to it and so many different parts and so many distinct cultures and economies—is more representative of the entire United States than most other states.

Nyce: And why solve this in the public sector? If the incentives are there, why not let the private sector figure it out and move that way naturally?

Stewart: That's happening to some extent. But two things: One, I think having the public sector get involved serves as a gentle nudge in this direction. Inertia is a powerful force.

Second, it's not so much that the public sector is getting involved in private businesses as it is that we're paying companies using tax credits to collect data for us and share the data. At the end of the five-year pilot project, we'll have this trove of data that we can use to gauge whether this was successful.

There have been four-day-workweek bills in the U.S. House and in California that essentially required companies to pay overtime after 32 hours. That's a really clean and also free way for the government to go about this. But it's also extremely heavy-handed. I think probably the reason those bills haven't gone anywhere is that companies come out in full force to rail against government intervention in the marketplace.

What's unique and new about this approach that we're trying in Maryland is that we're not forcing any companies to do anything they don't want to do. Rather than going to the hearing for the bill with every industry group cursing my name, hopefully I can go to the bill hearing with all of them standing beside me.

[Derek Thompson: The five-day workweek is dying]

Nyce: We're talking a lot about the practical politics of this. How much of a philosophical believer in the four-day workweek are you?

Stewart: I'm a believer. I definitely think that we need more Maryland-specific data if we're going to make any future steps or commit any more money to it. But ultimately, I'm not a neutral observer—I'm not a social scientist; I'm an advocate for this.

Nyce: But you have to play the game a little?

Stewart: Yeah, of course. I mean, it's not a game. We need to make forward progress. If we want to convince people—workers, other policy makers, business owners—that this is the way forward, we need more study results that are specific to relevant communities.

It's going to be really hard for me to persuade my colleagues that the time is now for this idea if the only data we have come from Scotland. That's just not going to be as persuasive as if it comes from Scotland, Maryland, and Berlin, Maryland, and Cambridge, Maryland.

Nyce: There are a lot of European-named cities in Maryland.

Stewart: I just rattled them off like that too; I'm actually kind of impressed with myself.

I'm not at all a dispassionate observer of this. I very much think that this is the way of the future. This is the original American dream. The thought was always that we were going to continue to be more and more productive and work less and less. But at a certain point, we stalled out.

It was extremely radical when Henry Ford moved to a five-day workweek. People were shocked. People even called it anti-biblical, because the Bible said there was only one rest day. The labor movement took this as their cause célèbre, and over a grinding series of decades, they were able to force states and the federal government to institute a five-day week as a matter of law. But it's been 100 years. Somewhere in the '80s or '90s, we got sort of off track. Now you hear more about, like, the #grindset than anything about reduced work hours.

Nyce: So you're really viewing this in the long arc of American labor history. Do you ever have any personal doubts?

Stewart: I do think that there's one big question mark with this idea, at least for right now, which is: How do we make sure that the effects reverberate across the economic spectrum? Because right now, with the exception of maybe some hospitals cutting hours for nurses, the companies that have made this step so far are Kickstarter, Shake Shack, Shopify. Typically, white-collar employees are the ones benefiting from more flexible and reduced schedules—just like how in the pandemic, white-collar employees benefited from more flexible time and flexibility about coming into the office, whereas blue-collar workers didn't get that break. They still had to go in every day and punch a clock, even if it meant that they were going to expose themselves to getting sick.

The tricky thing here is that there's a difference between salaried workers and hourly workers. And we've got to figure out a way to make sure that this bill—or this movement—doesn't become something that is felt most viscerally by people that already are doing pretty well. We want to make sure that this is an economy-wide transformation, that if it helps any group in particular, it helps those who are working-class the most—because they're the ones who have borne the weight of America's overworked culture for the past several decades.

Nyce: Are there any criticisms of the bill that you just flat out don't agree with?

Stewart: First of all, I don't think I've ever introduced a bill that is just this broadly popular. There was a poll on this issue recently, and 92 percent of Americans like the idea of reducing work hours.

You don't really hear a lot of good-faith criticisms. Probably the criticism that is valid, but that I just don't agree with, is the sort of quasi-libertarian idea that the state of Maryland just shouldn't care about this—that we shouldn't meddle in the affairs of private businesses. There's nothing more fundamentally connected to a Marylander's quality of life than how much free time they have. So the idea that that would not be in the purview of policy makers to me is insane.

But even more than that, it's hypocritical. I've heard from several colleagues who, when past tax cuts or tax credits for corporations have come up, couldn't have been more enthusiastic to give those away. But now all of a sudden they're crying libertarian.

Nyce: Why do you think that is?

Stewart: This bill is connected with the idea of improving the day-to-day lives of regular people. And I think for people who are ideologically committed to comforting the comfortable, it's anathema that they would support something that cuts costs for companies but through the lens of trying to make workers' lives a little bit more whole.

Nyce: Like your colleagues at the statehouse?

Stewart: Yeah. I have colleagues in the other party who applauded, for example, when President Donald Trump cut taxes for corporations. Now, I don't have any indication there's going to be widespread Republican opposition to this bill. But I have heard a couple of quotes in the media from some of my colleagues who seem like maybe they're going to oppose this on the grounds of laissez-faire capitalism—let the markets work.

But honestly, I haven't really heard very much pushback at all about this. There has been an explosion of interest in the bill.

This is my fifth year in the General Assembly. This bill has attracted more attention from my colleagues, from interest groups, and from the media than every other bill I've ever put in has combined. And, like, 95 percent of the interest has been positive.

Nyce: We talked through one philosophical criticism from libertarians about the role of the state. That's pretty much their whole gig. I wonder if there's a practical criticism here: Why is this something Maryland should spend money on versus all the other issues that are facing the state at any given time?

Stewart: That's a tough one. If the bill doesn't pass, I think that's what will doom it. Because even though we have a budget surplus in the state of Maryland, it certainly is the case that anytime you want to spend money, you've got to compete with every other priority under the sun. And I'm sure some of those priorities are more pressing and more important than this bill.

But this is only $750,000 of tax credits. This is not going to break the bank in the grand scheme of the state budget. And I would add that there's scarcely anything more important to humanity than free time.

Nyce: Well … like, health. Maybe "not dying of the coronavirus."  

Stewart: Sure. Yeah, I mean, "not dying."

The Framers put in "Life, Liberty, and the pursuit of Happiness." Certainly life is important. We want you all to be healthy. Liberties are important as well, obviously, and all the different freedoms we enjoy and making sure that those hold true. But pursuit of happiness is something that is also really important. This is really a larger conversation about where we are as a country, and whether we need to ask ourselves, for the first time in almost a century, if there is something better than living to work. America once stood for better ideals than just eternally increasing wealth and everlasting consumerism.

The reason I get so fired up about this is I've actually had cancer twice.

Nyce: I'm so sorry to hear that.

Stewart: No, no, I appreciate it. And I'm all good now. It kind of puts it in perspective—all the different tropes and truisms and clichés about realizing that nobody's guaranteed tomorrow. I think when you have that experience at such a young age, you realize how important time is. Time is a gift. And so the idea that there would be a bill but also a larger movement about reclaiming some of that time for ourselves—because it's finite for all of us—I think that that has some real power for a lot of people. Whether they have gone through an illness or an accident or they've watched a parent or a grandparent get older, I think people realize somewhere deep in their bones that their time is valuable. And they want to reclaim some of it for themselves.



Small molecule inhibitors of 15-PGDH exploit a physiologic induced-fit closing system

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36463-7

Inhibition of 15-prostaglandin dehydrogenase (15-PGDH) is a promising therapeutic target for regenerative medicine. We report the structure of 15-PGDH in complex with two different inhibitors. Unexpectedly, access to the binding pocket is regulated by a dynamic "lid" of the enzyme.
Scientists Attempt to Map the Multiverse
This story was originally published in our Mar/Apr 2023 issue.

If you live on this planet — and you're of a certain age — there's a decent chance you've seen the classic Star Trek episode "Mirror, Mirror," in which Captain Kirk and several members of the Enterprise crew find themselves in what appears to be a parallel universe. The trouble starts when they attempt to beam up from a planet during an ion storm. Something goes wrong. They appear aboard the Enterprise, but things are askew: Crew members greet the captain with Nazi-style salutes, and First Officer Spock sports a goatee. Observing these small but significant differences, Kirk muses that the crew has materialized in "a parallel universe coexisting with ours on another dimensional plane."

These days, one parallel universe is hardly enough for science fiction. Instead, it seems the entire multiverse is having its Hollywood moment. Films like Doctor Strange in the Multiverse of Madness and Everything Everywhere All at Once entice the viewer with multiple versions of various characters and a dizzying array of alternate realities. Though they're not particularly heavy on the physics, these films are definitely latching onto something. The idea of the multiverse — the provocative notion that our universe is just one of many — has fully cemented itself in mainstream pop culture. (Or, at least, in the current phase of the Marvel Cinematic Universe.)

Its appeal as a storytelling device is obvious. Just as time travel allowed Marty McFly to experience different timelines in the Back to the Future series, multiverse tales allow characters to explore a multitude of worlds with varying degrees of similarity to our own, as well as altered versions of themselves. While Hollywood can't seem to get enough of the multiverse, it remains deeply controversial among scientists.
Ask a prominent physicist whether they believe in a multitude of universes beyond our own, and you'll get either a resounding yes or a vehement no, depending on whom you encounter. Advocates on the two sides show no mercy toward each other in their books, on their blogs, and, of course, on Twitter. But physicists didn't pull the idea out of thin air — rather, several distinct lines of reasoning seem to point to the multiverse's existence, bolstering the idea's merit. Sabine Hossenfelder, a theoretical physicist at the Frankfurt Institute for Advanced Studies, has called the multiverse "the most controversial idea in physics."

The debate over the existence of unseen universes may seem rather pie-in-the-sky. After all, how could worlds that we can never visit — or even detect — possibly affect anyone's life? But the stakes are higher than they appear: Critics caution that legitimizing the multiverse could make it harder for the public to distinguish science from speculation, making it more difficult to keep pseudoscience at bay. (If scientists can't agree about how many universes exist, how can the public be sure there's a consensus on the reality of climate change, or the efficacy of vaccines?) Writing in the journal Nature in 2014, physicists George Ellis and Joe Silk describe the debate over the multiverse as a "battle for the heart and soul of physics."

Evolving Models

Philosophers have pondered a multiplicity of worlds at least since the ancient Greeks. But it was only in the 20th century that astronomers and physicists began to talk about multiple universes in the terms we use today. In the 1920s, astronomers found that distant galaxies are moving away from each other, implying that the universe itself is expanding. If you ran a recording of the history of our cosmos backwards, the galaxies would be seen rushing toward one another. The inescapable conclusion was that, in the remote past, the universe was much smaller, denser, and hotter.
This discovery gave rise to the Big Bang model of cosmology, which describes how the universe evolved over the past 13.8 billion years from an ultra-dense blazing fireball to the enormous and vast expanse we know today. The first pathway to suspecting there might be a multiverse emerged when scientists found problems with this original Big Bang model. The universe today is highly structured. Matter has clumped together to form stars, planets, and galaxies, while the space between these objects is nearly empty. And yet under the Big Bang model, the very early universe is believed to have been incredibly homogeneous, with every part just about as hot and dense as every other part, like a cup of hot chocolate that's been thoroughly stirred. So how did today's clumpy, structured universe come about?

In the 1970s and '80s, a handful of physicists, led by Alan Guth, Andrei Linde and Alexei Starobinsky, put forward a modified version of the Big Bang, known as inflation. In the inflation model, some tiny bit of space-time underwent a stupendous (if brief) growth spurt, lasting no more than a trillionth of a trillionth of a second. This exponential expansion enlarged minuscule variations in the distribution of matter throughout the universe. Over time, those variations grew to be the galaxies and clusters of galaxies that now pepper the cosmos, containing within them countless stars and planets.

But if inflation could blow up one bit of space-time, why not many bits of space-time? Why shouldn't inflation be happening continuously, creating new universes all the time? There didn't seem to be any way to constrain inflation so that it yielded just one universe — and so the notion of "eternal inflation" was born, and the idea of multiple universes with it. In this view, little pocket universes — Stephen Hawking preferred the phrase baby universes — are continuously popping up, with the tally of new universes endlessly increasing.
(In the context of "Mirror, Mirror," we might imagine a universe where Spock has a full beard or mohawk, alongside an infinite number of other scenarios.) Some physicists welcomed this multiplicity of universes. In his lecture slides, Linde, one of eternal inflation's greatest champions, has depicted these universes as little colored spheres, bubbling up and creating new bubbles as they evolve, like a frothy pot of boiling water. He is on record as saying he'd bet his life that the multiverse is real.

Others are more cautious. Andreas Albrecht, a theoretical physicist at the University of California, Davis, who alongside Princeton University theoretical physicist Paul Steinhardt helped shape inflation into its modern form, finds eternal inflation troubling. That trouble stems from the idea of infinity itself. To be sure, infinity is no problem for mathematicians scribbling equations on blackboards. But physicists strive to describe the real world, where one doesn't encounter an infinite number of anything, let alone universes. "At the end of the day, my physics instincts pull me away from eternal inflation," Albrecht says.

Steinhardt points to another shortcoming of eternal inflation: The theory says nothing about what any one particular universe will be like. "The problem is, now you have a theory which makes no predictions," he says. "For any property that you can imagine, the opposite of that property also occurs, an infinite number of times."

Hossenfelder is equally skeptical: "In eternal inflation, they say they have all these universes popping up. And I'm like, well, where are they popping? Of course, no one sees anything popping; it's just there in the mathematics."

Through the 1980s and early '90s, even with inflation slowly solidifying its status as the go-to model of the early universe, the idea of eternal inflation remained little more than a sideshow. Most physicists didn't worry too much about the (alleged) extra universes.
Out of sight, out of mind, as it were. However, another idea from the frontiers of physics was brewing at around the same time — and it seemed to lend support to the many-universes idea. This new approach came from string theory, the notion that the universe is made up of tiny, vibrating strings, far smaller than anything we could see through our best microscopes, or even detect with our most powerful particle accelerators. String theory's equations allow for a multitude of solutions, each corresponding, physicists have suggested, to a distinct universe. And so, like eternal inflation, string theory appears to allow for a staggeringly vast array of universes.

Stanford University physicist Leonard Susskind described the resulting picture as a "landscape" of universes, seemingly echoing the multiverse given by eternal inflation. In fact, many physicists believe the two ideas are intimately related. "You can't separate them," says Sean Carroll, a theoretical physicist at Johns Hopkins University. "One is saying that different regions, where there are different local laws of physics, can possibly exist; that's what the string theory landscape is saying. Inflation is saying, 'And they become real.'"

Measurable Predictions

The multiverse controversy is rooted in the notion of testability. If we can't interact with these other universes, or detect them in any way, some experts insist that relegates them to mere philosophical speculation. But multiverse proponents see it differently: There may be very good reasons to believe in the multiverse, they argue, even if we cannot poke at it or glimpse its many universes.

Albrecht, like many physicists, was never comfortable with the version of the multiverse suggested by eternal inflation or string theory. Still, he found himself drawn to another kind of multiverse — the one offered by the "many worlds" interpretation of quantum mechanics.
First, a Quantum 101 refresher: Quantum mechanics rests on the idea of a wave function, a kind of mathematical recipe for predicting where a particle will be, or how it will be moving, at some particular moment. Wave functions evolve over time; that evolution is governed by the Schrödinger equation, roughly analogous to Newton's equation F = ma (force equals mass times acceleration). Where Newton's physics determines the path of a thrown baseball, Schrödinger's equation predicts the future state of a quantum system. The catch is that we cannot know what state a quantum system is in until we measure it. Prior to measurement, it can even be in a superposition of states; that is, it's in many states all at once. Sound familiar?

To demonstrate the principle, consider an electron. According to quantum mechanics, an electron can spin in two different ways ("up" and "down," in physics terms). Before you look at the electron, the theory says its spin is indeterminate; it can be in both states, spin-up and spin-down. But when you actually measure the electron's spin, the wave function "collapses," and the superposition goes away; you're left with one spin or the other. This view is called the Copenhagen interpretation of quantum mechanics, after the city where its first proponents, Niels Bohr and Werner Heisenberg, worked.

Some, like Erwin Schrödinger, worried about the possibility of quantum effects scaling up and impacting the everyday world — his famous alive-and-dead cat is the quintessential example. The standard view was that, if you could somehow maintain a cat in a superposition of states (current thinking suggests this would be astoundingly difficult), the wave function of the cat would collapse when observed, just as with an electron.

Some physicists find the Copenhagen interpretation unsatisfying. Why do measurements cause wave functions to collapse, and what qualifies as a measurement in the first place?
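For readers who want the refresher in symbols, the wave-function evolution and the electron-spin superposition described above take a compact standard form (textbook notation, not anything specific to this article):

```latex
% Time-dependent Schrödinger equation: the Hamiltonian \hat{H} deterministically
% evolves the state |\Psi(t)\rangle, much as F = ma determines a baseball's path.
i\hbar \frac{\partial}{\partial t}\,\lvert \Psi(t) \rangle = \hat{H}\,\lvert \Psi(t) \rangle

% A spin superposition for one electron, with complex amplitudes \alpha, \beta
% normalized so the total probability is 1:
\lvert \psi \rangle = \alpha\,\lvert \uparrow \rangle + \beta\,\lvert \downarrow \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1

% On measurement (the Born rule): "up" occurs with probability |\alpha|^2 and
% "down" with probability |\beta|^2; in the Copenhagen picture the superposition
% then collapses to whichever outcome was observed.
```

Note the contrast the article is drawing: the evolution equation itself is fully deterministic, and probability enters only at measurement, which is precisely the step the many-worlds interpretation reinterprets as branching rather than collapse.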
Maybe, a few thinkers have suggested, the wave function doesn't collapse. Ever. Instead, when we make a measurement, the universe divides, or branches, creating a brand-new universe for each possible outcome. (Some experts caution against this phrasing as being overly simplistic, but it will do for our purposes.) When we look at that electron, the universe splits in two, with one universe containing a spin-up electron and one containing a spin-down electron. Schrödinger's cat is similarly tamed: In one universe, the cat lives; in another, it dies. These universes also contain unique copies of you — or, unique copies of universe-hopping Evelyn Wang, in the case of Everything Everywhere All at Once.

This many worlds view of quantum mechanics was first set out by physicist Hugh Everett in the 1950s, and has slowly gained followers in the decades since. Albrecht is one of them; he sees the idea as elegant. For him, the Copenhagen notion, with its mysterious appeal to ill-defined "measurements," is unwieldy and awkward. Plus, quantum mechanics works; it's much more than just equations on chalkboards, with actual technology like lasers, semiconductors, atomic clocks and MRI scanners to show for it.

Carroll is also an ardent advocate for Everett's model, arguing the case in his 2019 book Something Deeply Hidden, in which he calls the theory's array of unseen universes "indisputably real." Max Tegmark, a physicist at MIT, expounded on the many worlds model in his 2014 book Our Mathematical Universe. Tegmark says he often thinks about the other copies of himself in those other worlds: "I feel a strong kinship with parallel Maxes, even though I never get to meet them," he writes. "They share my values, my feelings, my memories — they're closer to me than brothers."

For Hossenfelder, however, those parallel Maxes are mere fiction, along with most conjecture about the multiverse.
The problem, as she sees it, is that we take the equations too seriously, a position she details in her 2018 book, Lost in Math. Hossenfelder takes the view of an instrumentalist, a philosophical stance that says we should take a theory seriously only if it leads to verifiable, measurable predictions. In this view, Everett's theory offers a particular mathematical approach to quantum mechanics, but says nothing about what's really out there. In Hossenfelder's eyes, Albrecht and Carroll have made the mistake of thinking that the math behind the theory is real.

Carroll vehemently disagrees. He argues — channeling Galileo — that mathematics is the language we use to describe our physical theories; it is not some extra, added ingredient. "No one looks at F = ma and goes, 'Oh, that's mathematics, I don't trust it, I'm going to stick to physics,'" Carroll says. For him, Newton's equation is obviously physics, and so is Schrödinger's. If Schrödinger's equation predicts the existence of many worlds, so be it. If we take Newton seriously, we should take Schrödinger seriously, too.

Most physicists see Everett's many worlds as fundamentally different from those given by eternal inflation or by the landscape version of string theory. (Though a few theorists, including Susskind and Tegmark, have speculated that they may be connected.) Even so, the fact that several pathways seem to point to a multiverse suggests that the idea is worthy of such scrutiny. "Whether there's a multiverse or not does not hinge on any one theory being right or wrong," says veteran science writer Tom Siegfried, who examined the history of the multiverse idea in his 2019 book The Number of the Heavens. "There are different possible ways there could be a multiverse, and we don't know if any of them are correct. […] But we have reasons to take some of these ideas seriously."

The way Hossenfelder sees it, having a basket of speculative theories is no better than having just one.
In every case, we're asked to believe in the existence of universes that we can never see or study in any way. "I'm not saying it's wrong," she says. "I'm just saying it's no longer science."

Competing Explanations

The controversy may sound like harmless infighting among a small group of physicists. But in their 2014 Nature essay, Ellis and Silk argued that if physicists aren't careful in distinguishing speculative theories from established fact, the public could be led astray. Giving credit to such speculation could "open the door for pseudoscientists to claim that their ideas meet similar requirements." Or, as Columbia University physicist Peter Woit wrote on his blog, those who support the multiverse idea risk "turn[ing] fundamental physics into pseudo-science." For Nobel laureate physicist David Gross, invoking unseen universes to explain the properties of the one we actually see is a bit like invoking God. He once said that it "smells of religion and intelligent design."

As scientists struggle to choose between competing explanations for what they observe, a ghost often appears in the battlements — not the ghost of King Hamlet, in this case, but that of William of Ockham. The 14th-century English churchman and philosopher is best known for Ockham's Razor, which suggests that simple explanations are better than more complicated ones. Taken at face value, Ockham's approach might appear to argue against the multiverse on the grounds that it carries excessive baggage (all of those unseen universes) when we just experience a single universe. For many physicists, the argument ends there. If simpler is better, why not stick with the universe we actually see?

Except, explains Siegfried, Ockham did not merely say that simpler is better. Rather, in devising an explanation, it's desirable to use the fewest principles, even if they lead to complex results, Ockham argued.
(It's no knock against, say, astrophysics, that it predicts billions of planets orbiting billions of stars.) Not only that, Ockham was actually pro-multiverse. "Ockham himself was the biggest advocate for the multiverse," Siegfried says. "He argued vigorously against all of Aristotle's objections to having more than one universe. So it's kind of ironic that people use Ockham's Razor to argue against the multiverse."

In the movie Spider-Man: No Way Home, there's a playful scene in which today's web-spinner, played by Tom Holland, meets parallel-universe versions of himself from earlier films, played by actors Andrew Garfield and Tobey Maguire. This is, to be sure, straight-up fiction. We have no chance of ever actually seeing the universes described by eternal inflation, string theory, or the many worlds version of quantum mechanics. That also nixes the odds of ever encountering our other selves.

The odds are similarly low that the debate over the multiverse will end soon. But history, according to Siegfried, suggests which way the wind is blowing: At one time, the only galaxy we knew of was the Milky Way; now we know that billions of other galaxies are scattered throughout the universe. Could a more expansive view of the universe itself be the next breakthrough? As Siegfried puts it: "Every time in the past that we've thought, 'We've got it; this is what the whole universe is' — the people who've said, 'Maybe there's more than one of those' have always turned out to be right."

The Multiple Multiverses

There isn't just one theory that suggests we live in a multiverse. In fact, physicists have found that several different ideas in particle physics and cosmology appear to point to the existence of universes beyond our own.

Eternal inflation: This multiverse model presents a world where little "pocket universes" are continuously popping into existence.
It stems from the idea of cosmological inflation, which posits that the universe went through a massive growth spurt in the first moments of the Big Bang. These pocket universes grew just as ours did, and might now contain stars, planets and galaxies like ours.

String theory: In this theory, our universe is described as though made up of tiny, vibrating strings that are too small to detect. The equations of string theory have billions upon billions of solutions; some physicists believe this leads to a "landscape" of different universes. The idea may be closely related to eternal inflation.

Many worlds: An attempt to explain a key aspect of quantum mechanics, the many worlds theory says that the universe splits each time a quantum measurement is made. This leads to an ever-growing array of universes within a branching multiverse. The model suggests that this multiverse contains multiple copies of you, as well.

The Multiverse on Screen

As the titular Doctor Strange, Benedict Cumberbatch flits between universes with ease. Mild-mannered laundromat owner Evelyn Wang (Michelle Yeoh) battles her own demons from various branches of the multiverse. Tom Holland's Peter Parker takes a spin with alternate versions of the character from prior movies. From Doctor Strange in the Multiverse of Madness and Spider-Man: No Way Home to Everything Everywhere All at Once, alternate universes are plentiful in movies today. But writers and filmmakers have been exploring the topic for nearly a century.

On screen, some of these stories allude to physics; often, the parallel universes are merely imagined, playing out only in a character's head. But these tales all capitalize on what the multiverse offers — endless chances to imagine what could (or should) have happened if things went differently.
Here is just a sampling of the many films that have toyed with the idea of multiple universes, and are well worth your time:

It's a Wonderful Life (1946): In this Christmastime classic, George Bailey, contemplating suicide, tells his guardian angel, Clarence Odbody, that he wishes he'd never been born. But Odbody shows Bailey an alternative universe in which he had indeed never lived — and it's much worse. In the end (75-year-old spoiler alert), Bailey asks for his original life back.

Run Lola Run and Sliding Doors (both from 1998): In both of these films, the central character experiences multiple timelines depending on how a specific moment unfolds. In Run Lola Run, it all hinges on what happens when Lola (Franka Potente) runs down the stairs of her apartment. In Sliding Doors, the timelines diverge depending on whether Helen Quilley (Gwyneth Paltrow) manages to board a London Underground train before the doors close.

The One (2001): The multiverse figures prominently in this Jet Li action film, in which a rogue agent travels to parallel universes in order to kill other versions of himself.

Coherence (2013): A reunion for a group of friends goes awry when a passing comet splits reality in two. Their only hope for survival is to hunt down their multiverse doppelgängers.

Doctor Strange (2016): The film presents Doctor Strange's origin story — and at least pays lip service to modern physics. At one point The Ancient One (Tilda Swinton) says to Strange (Benedict Cumberbatch): "This universe is only one of an infinite number. Worlds without end… Who are you in this vast multiverse, Mr. Strange?"

Spider-Man: Into the Spider-Verse (2018): This multi-dimensional take on the web-spinner features multiple Spideys from multiple Earths — including a version of the superhero as a talking cartoon pig. In 2019, it won the Academy Award for Best Animated Feature.
5 Scientific Discoveries From Girls Younger Than 12 Years Old
When it comes to scientific, archaeological and paleontological discoveries, girls really DO get it done! If a casual Google search is anything to go by, it sometimes seems like girls are making astonishing finds every day, advancing our knowledge of science, nature, the ancient world and so much more. In honor of the upcoming International Day of Women and Girls in Science (Feb. 11), here are a few of our favorite discoveries, and the girls who made them.

1. Molly and the Megalodon

The most recent find comes courtesy of 9-year-old Molly Sampson. In 2022, the 4th-grader and her family were combing Calvert Beach in Maryland on Christmas morning, looking for shark teeth. Actually, Molly, who shares a love of fossils with her dad, was specifically looking for megalodon teeth, and Calvert Beach is a popular place to find them. But Molly got way more than she bargained for when she discovered a meg tooth as big as her hand, about five inches long. Even coming from an extinct shark whose name literally means "giant tooth," this was definitely an uncommonly large find (the biggest megalodon tooth known is only a couple of inches longer).

Read More: The Mystery of the Megalodon and What Scientists Know

The once-in-a-lifetime discovery was massive enough to make headlines around the world, when Molly confirmed her 15- to 20-million-year-old find at a local marine museum. The tooth remains in her private collection.

2. Saga's Sword

Once upon a time, a little girl named Saga discovered a mysterious sword, hidden for centuries in the waters of a lake. Yeah, it does sound like the beginning of some ancient epic, but it happened for real in 2018. While on summer vacation at a lake near Tånnö, Sweden, Saga Vanecek, then 8 years old, plucked a long, rust- and sediment-covered object from the lakebed. It had been there for quite a while — as much as 1,500 years — but Saga instantly recognized it as a sword, still sheathed in what remained of a leather and wood scabbard.
The blade was originally believed to date to the Viking Age, but archaeologists later determined that it is older, from around 400 to 500 A.D., during a time known as the Migration Period. Despite the Internet's near-universal wish that Saga should keep the sword and claim her rightful place as queen of Sweden, the artifact instead resides at a museum, not far from where it was found, under the care of conservators who will continue to preserve and study it. In lieu of a coronation, Sweden's national heritage board paid Saga a cash bounty of around $1,600.

Read More: Meet 10 Women in Science Who Changed the World

3. Neshama's Egyptian Amulet

Neshama Spielman was also just 8 years old when she and her family were doing some citizen science work by participating in the Temple Mount Sifting Project, an initiative to examine tons of dirt and debris that had been improperly excavated and moved without the archaeological supervision required by law. Neshama was sifting debris in Jerusalem when she found a small object — part of an amulet that its long-ago owner would have worn around their neck. It wasn't until 2016 that archaeologists informed Neshama, and the rest of the world, that the amulet was Egyptian in origin, and more than 3,000 years old. Furthermore, the amulet bore the name of a pharaoh: Thutmose III, who ruled from 1479 to 1425 B.C., roughly the time when Jerusalem would have been under Egyptian rule. To see other surprising finds from the project, you can tour a virtual exhibition here.

4. Clara's Amazing Molecule

For Kansas City, Missouri 5th-grader Clara Lazen, a mundane school experiment launched her into the heights of academia. Working with a kit that allowed her to build models of different kinds of molecules, Clara went freestyle, assembling a combination of carbon, nitrogen and oxygen atoms that left her teacher scratching his head: She had created a molecule he'd never seen before.
The teacher shared the mystery configuration with his friend Robert Zoellner, a chemistry professor at Humboldt State University in California. Zoellner determined that Clara's molecule, tetranitratoxycarbon, was indeed new, and special enough that it warranted a scientific paper on its discovery. The resulting work appeared in 2012 in Computational and Theoretical Chemistry, with Clara earning a co-author credit on the paper. Not bad for a 10-year-old.

5. Mary's Fantastic Fossils

Our last entry lived about 200 years before any of the other girls mentioned here. But she's a sentimental and historical favorite when it comes to recognizing girls who made amazing discoveries. Her name was Mary Anning. Born in 1799 in Lyme Regis, Dorset in England, Mary was a proto-paleontologist. Despite a limited education, Mary nevertheless made a name for herself as a fossil collector. She and her father would often comb the cliffs near her home, part of what is now known as the Jurassic Coast, an area famed for its wealth of fossils from that period. It was common for locals to gather fossils or shells to sell as curiosities to tourists, and Mary's family did likewise to support themselves.

Read More: The Unsung Heroes of Science

But Mary was particularly gifted at identifying and carefully extracting choice fossils from the limestone and shale cliffs. Mary's first major find, made in 1811 when she was still a child, was an ichthyosaur skeleton, one of the earliest specimens ever found and arguably the finest of its kind. She also made important early discoveries of plesiosaur and pterosaur fossils. Within her lifetime, Mary became known throughout Europe for her work. And while many geologists of the day were her regular customers, she endured a notable lack of recognition from the scientific community. Like all women of the era, she was barred from joining or even attending meetings of the prestigious Geological Society.
However, towards the end of her life (Mary died of breast cancer in 1847), the Society belatedly acknowledged her contributions to science and donated money to support her. That was awfully big of them, considering how many of its members benefitted from her discoveries. But it's not like they erected a statue of her or anything. Instead, that task would fall to another remarkable girl, more than 150 years later. In 2018, 11-year-old Evie Swire began a fundraising campaign for a bronze statue of Mary Anning. After raising more than $100,000, the statue was finally created and unveiled in Lyme Regis in May of 2022. Well done, Evie!
Earth's Population Has Hit 8 Billion People, But There's Still Hope For Us Yet
In 1798, Thomas Malthus, an English economist and demographer, published "An Essay on the Principle of Population," in which he predicted that human population growth would eventually exceed the Earth's ability to provide enough food for everyone. This would lead to famine, disease, war and other associated travails. So far, that hasn't happened.

In 1968, 170 years later, Paul Ehrlich published a book titled The Population Bomb, another doomsaying work predicting that human fecundity would soon drain the planet's resources and send Earthlings into a death spiral. Widespread starvation, Ehrlich argued, was both inevitable and imminent. But that bomb hasn't gone off either.

Read More: We've Been Worried About Overpopulation for Millennia

This past November, the U.N. announced that Earth's population had reached eight billion. And we're still here. On top of that, according to the World Bank, the number of people living in extreme poverty has been steadily decreasing over the last three decades. However, the pandemic reversed the trend somewhat — temporarily, we hope.

Yet eight billion hungry souls — hungry not just for food but for housing, clothing, computers and other resource-intensive necessities — are straining the planet's resources. We're talking not only about running out of the basic necessities of life but about the combined effects of humans on the environment. The more of us there are, the more we take from the environment.

Read More: The Domino Effects of a Global Food Shortage

Population Strain

As the number of humans increases, wildlife decreases. According to the United Nations Global Resources Outlook 2019, resource use has more than tripled since 1970, including a 45 percent increase in the use of fossil fuels. Or as the U.N. puts it on their Act Now website: "We are using the equivalent of 1.6 Earths to maintain our current way of life, and ecosystems cannot keep up with our demands." Our planet is currently headed toward catastrophe.
If we don't make any changes, the looming crisis so many have predicted and are still predicting will almost certainly occur. However, reaching this population milestone may not be what tips the scales. Eight billion souls bring hazard but also hope.

"A Grand Success"

According to Rachel Snow, Chief of the Population and Development Branch at the United Nations Population Fund, reaching the eight billion mark, rather than a catastrophe, is a "grand success." Making it to a world population of eight billion means that life expectancy, healthcare, rates of education and standards of living are improving worldwide, and not just in wealthy nations.

And population growth is slowing. The pace of population growth peaked in 1964 and has been trending downward ever since. At one point, the population was expected to reach 11 billion by 2100, but based on current projections, the population is expected to plateau in the coming decades and reach only 10 billion by 2100, when it may begin to decline. More than 50 countries are already in population decline.

Still, says Snow, eight billion and counting could be a catastrophe if governments can't prepare for what's coming in the next few decades. Narrowly avoiding a Malthusian and environmental disaster will require determination, commitment and ingenuity. And ingenuity is where those eight billion come in. Reaching the eight billion mark means "eight billion people who will develop unique, original, creative ideas to make the world a better place," she says.

Read More: Combatting Fast Fashion: How Compostable Clothing Can Help The Environment

If you look just at the numbers, eight billion is terrifying. But it's not just a matter of numbers. If it were, Malthus' predictions would have long since come true. It's a matter of how well we manage resources and how well we care for the planet. It's a matter of how well we develop ideas for correcting the damage of the past and protecting the future.
We have a lot of work to do if humanity is to survive. But we have eight billion people to help do it.
The Fascinating World of Neanderthal Diet, Language and Other Behaviors
The Neanderthals represent the richest, most robust and most studied species in the hominin record, other than our own. And thanks to the wealth of available specimens — including their remains, tools, trash, and many more traces of their activities — scientists are piecing together a picture of their basic behavior, bit by bit. From their unique diet to their advanced language and communication skills, the picture that emerges is far from primitive. In fact, though the Neanderthals were an insular species, sticking to themselves and a few close companions, they were also accomplished and adaptable, with behaviors that allowed them to weather some of the coldest conditions the world has yet seen.

Read More: How Humans Survived the Ice Age

Among their most adaptive behaviors were their acquisition of food, manufacture of tools and articulation of ideas through speech and symbols.

Where Did the Neanderthals Live?

In the terrains of Africa around 400,000 years ago (or maybe as many as 800,000 years ago), an ancient population of hominins started to split apart, forever changing the course of human history. While one portion of this population stayed put, the other trudged to Europe and settled there, initiating a period of geographic isolation in which the two groups gradually accumulated their own genetic traits, generation after generation.

Read More: Who Were the Neanderthals?

Over time, the two groups turned into two separate species, with Homo sapiens arising in Africa and Homo neanderthalensis appearing in Europe. And it was there that these so-called Neanderthals would contend with the impossibly cold conditions of the Ice Age, adapting to the temperatures by becoming shorter, broader and bigger-brained.

What Were Neanderthal Behavior Traits?

Armed with these adaptations, the Neanderthals thrived for thousands of years, producing an ample record of their activities throughout that time.
And more than transmitting their genetic material to the genomes of many modern individuals, they also left many material traces from their lives, allowing archaeologists and anthropologists to speculate about their behavior. Overall, scientists suspect that the Neanderthals behaved in an isolated, insular way, though they also showed adaptability and intelligence in several areas. Targeting an array of prey animals according to the season, they made and manipulated an assortment of tools and probably produced simple speech. Not only that, but they also participated in symbolic behaviors, dabbling in art, personal adornment and ritual burial, according to some scientists.

Read More: Debunking the Myth of Homo Sapiens Superiority

Neanderthal Society

Archaeologists tend to agree that the Neanderthals occupied open settlements or took shelter from the cold in caves, cycling through a couple of separate settlements according to the time of year. In these sites, they typically resided alongside 12 to 25 relatives. Though these tribes usually stuck to themselves, they weren't wholly isolated. Studies suggest that they probably interacted with 10 to 20 neighboring troops, and sometimes as many as 50, with whom they shared social identities and maintained associations for mating, manufacturing and collective coping in times of trouble.

The social organization of these tribes is still stuck in the shadows, though some genetic studies suggest that females pursued partners in neighboring troops in an attempt to avoid inbreeding. And while some sites show the telltale signs of treatment for the sick and injured, so, too, appear the traces of intraspecies violence, suggesting a complexity of social interaction that's similar to our own.

Neanderthal Diet

Anatomically, the Neanderthals were omnivores, though scientists suspect that they consumed more meat than plants thanks to the reduced availability of flora in their cold climate.
In fact, the chemical composition of several Neanderthal skeletons substantiates this, showing scientists that the average Neanderthal diet consisted of meat, meat and more meat (with the addition of plant material only occasionally).

Read More: Neanderthals Were Probably Carnivorous, According To A Fossilized Tooth

As such, the Neanderthals played the part of an apex predator, targeting species according to the seasons. Munching on reindeer in the winter and red deer in the summer, the Neanderthals also ate aurochs, mammoths and boars — among other animals — though they weren't always as widely available. Fans of flavor, the Neanderthals applied an assortment of tricks to make their meals tastier, pounding, crushing and cooking their food over fires prior to consumption. And though archaeologists aren't absolutely certain whether the Neanderthals started these fires themselves, the species frequently manipulated flames, according to the piles of ash in many of their settlements.

Neanderthal Language

Some scientists say that the sophistication of Neanderthal tools testifies to the species' astute observational abilities, while others think that their toolmaking was too specialized to share and spread without words and sentences. Whether or not language was necessary to make and manipulate these tools, studies do demonstrate a shared neurological basis for toolmaking and speech. Ultimately, while scientists still struggle to pinpoint the particulars of Neanderthal language and speech, anatomical and genetic analyses suggest that they possessed auditory and speech abilities similar to ours.

Neanderthal Rituals

Neanderthals weren't constrained to verbal communication. Whether or not they spoke, archaeologists speculate that they also articulated themselves symbolically, creating a material culture of art and adornment.
Scratching the walls of their caves with spots, slashes and other abstractions and splashing them with paints and pigments, the Neanderthals also decorated themselves with beads, bones and shells and collected an assortment of unusual articles, such as crystals and animal skulls, which they stashed in their settlements.

Read More: Neanderthals May Have Used Animal Skulls as Decor

Some scientists add that the Neanderthals' tendency to deliberately bury their dead represents their symbolic thinking, too. And though there's no single burial that's universally interpreted as an instance of symbolism, the analysis of pollen particles at some sites suggests that the Neanderthals did decorate their dead with flowers, such as yarrow and bachelor's button, before burial.

Neanderthal Tools

One of the clearest signs of their intelligence, Neanderthal toolmaking centered around the creation of sophisticated stone flakes (though they fashioned tools out of other materials, too). To form these flakes, the innovative Neanderthals selected a small lump of stone and struck slivers off the sides until it took the shape of a shell — flat on one side and rounded on the other. They then smashed the top of the stone several times over, hacking off a series of similarly sized slices, which they then wielded as tools.

The Neanderthals used some of these flakes without any added modification, though they turned others into points, spears, scrapers, awls and axes — among other types of tools — for a wider assortment of applications. For instance, though they thrust or threw their stone-tipped spears into their prey, they selected scrapers and awls to prepare and punch holes in hides, which they then tied together with torn animal tissues to create a simple form of clothing.

What Happened to the Neanderthals?

Despite all their advanced behaviors, the Neanderthals sustained small populations that made them more susceptible to obstacles such as climate change and competition.
Read More: Why Did Neanderthals Disappear?

In fact, though it's a popular theory that the Neanderthals were wiped out around 40,000 years ago when their close cousins from Africa — our own species — started streaming into their European territories, there's not much in the archaeological record to indicate that the Neanderthals disappeared due to interspecies violence alone. Instead, a confluence of factors probably played a part in the extinction of the species, with small population sizes, sickness, worsening climate conditions, and interspecies competition and assimilation all contributing to their disappearance in different areas and times.

These findings challenge previously held views of the species as primitive beings. Understanding their capacity for language and their sophisticated use of tools underscores the importance of continued study and excavation of archaeological sites.
What the Stonehenge Builders Liked to Eat
Stonehenge is probably the world's most renowned henge – the name given to prehistoric stone or wooden circles. It's thought that the site's builders gathered close by at a settlement called Durrington Walls, which dates back to around 2500 B.C. Archaeological studies of the settlement, found around two miles from Stonehenge, have revealed what its inhabitants used to feast on.

Where's The Pork?

Researchers gleaned insights from a massive haul of animal bones found at Durrington Walls, explains Umberto Albarella, a zooarchaeologist with the University of Sheffield. Such studies have shed light on the behavior of the inhabitants of Durrington Walls — the presumed builders of Stonehenge — and are helping archaeologists reconstruct what their daily life may have looked like.

"The most remarkable thing is that most of the animal remains came from pigs," Albarella says, adding that pigs made up around 90 percent of the bones. Pork, then, was particularly plentiful and seemingly popular. Cattle provided another source of sustenance, while some evidence of the consumption of aurochs, an ancient wild ancestor of cows, was found. It's pretty certain that the pigs and cattle were domestic animals, Albarella adds.

Read More: How Hunter-Gatherers Used The Land Around Stonehenge

Durrington Walls Was A Gathering Place

Researchers believe there was a permanent settlement at Durrington Walls, but it also drew in people from across the region for ceremonies and seasonal celebrations. Evidence shows that these travelers brought livestock with them from far and wide to provide meat for feasts. "We can also envisage a situation in which it looks like there were big pig roasts," Albarella says. "There is also quite a lot of evidence of fires at the site." Furthermore, according to research, some of the pigs were killed at a young age, suggesting planned and seasonal consumption.
Read More: Stonehenge May Be an Ancient Solar Calendar

Last year, another study offered yet more evidence of eating habits at Durrington Walls, thanks to ancient parasite eggs preserved in coprolites – fossilized human and dog feces. These coprolites yielded clues that previous archaeological finds had not. Analyzing the preserved feces showed that "individuals had been eating the internal organs of the cattle," says Piers Mitchell, a professor at the Department of Archaeology at the University of Cambridge and lead author of the study. Given the presence of the parasites, the innards were probably undercooked. "They weren't just eating the steak bits… they were eating the whole lot."

Read More: Medieval Friars Were Filled With Worms

Durrington Walls Was Also A Meating Place

Curiously, parasites associated with freshwater fish were found in one of the dog samples, but where they came from is uncertain. "Now that means that dog presumably ate some raw freshwater fish and then got infected," Mitchell says. Because some people were only at Stonehenge for short periods of time, it's possible that the dog in question ate the fish elsewhere, he continued, as there's no other evidence that the builders of Stonehenge consumed fish during their festivals.

While some evidence of the consumption of fruits and seeds – from apples, cherries and other wild fruits – was found at the site, it appears that the Durrington Walls peoples preferred meat. "There isn't much in terms of plants. So, it looks like it must have been quite a heavy meat diet," Albarella continues.

Albarella sees these kinds of studies as holding even more significance than just shedding light on Neolithic eating habits. "There is so much evidence in the past, both in history and archaeology, of war and people fighting each other," he says, adding that Stonehenge and the site of Durrington Walls, however, were clearly sites where people came from far and wide to participate in rituals and feasts.
"It's nice to see that we can reconstruct something that brought people together rather than against one another."  
Will We Ever Figure Out How to Defy Gravity?
The gravitational force is by far the weakest of the four forces of nature. It's simple to defy gravity: just lift something in the air. But the annoying thing about gravity is that it's both persistent and infinite in range, which takes a surprising amount of work to overcome. Gravity is so weak that even if it were a billion times stronger than it is now, it would still be the weakest of all the forces. The whole mass of the Earth is pulling on you, but you can reach over and grab a pencil and overcome that entire gravitational might. Naturally, eventually you'll get tired and put the pencil down, but we do have other methods to magically overcome the strength of gravity.

Read More: What is the Fifth Force?

Defying Gravity in Space

The magnetic force can keep an object suspended on the side of your fridge for eternity. Even stronger magnets, using superconductors, can levitate entire train cars, enabling super fast transportation that floats above the track. Going further, it's not all that difficult to defy gravity and get into space. After all, the edge of space is just 100 kilometers (or about 62 miles) away, and shooting something straight up for that distance is not the hardest thing to do in the world.

But gravity does have a superpower. Even at the edge of the atmosphere and the beginnings of space, Earth's gravitational pull isn't much weaker than what it is on the surface. So, unless you keep accelerating, eventually gravity will pull you back. Most of the energy that we put into rockets doesn't go into getting to space; it goes into staying in space. If you want to escape the gravitational clutches of the Earth altogether, you must achieve a speed of at least 11.2 kilometers (or about 7 miles) per second, which is around 33 times the speed of sound.

Read More: How to Understand Einstein's Theory of Gravity

Once in space, we have some methods available to simulate the effects of gravity.
This is important because a constant gravitational pull is vitally important for maintaining healthy bodies. Without gravity, our hearts grow weaker, our bones get thinner, and our entire cardiovascular system diminishes. Without constant exercise, astronauts who spend too much time in Zero-G couldn't survive a return to Earth. Engineers have developed concepts for rotating space habitats to recreate the effects of gravity. Instead of a massive object on the Earth pulling on you, the wall of a spacecraft would spin and fling you against the outer wall. The centrifugal force would give you the exact same sensation of gravity that you have on our planet, saving you from the ravaging effects of Zero-G. All that technology still lies in the world of science fiction, but space agencies around the world are interested in developing such habitats for long-term missions in space. Anti-Gravity Devices and Dark Energy Speaking of science fiction, writers and authors love to come up with all sorts of gravity-defying gizmos, whether to provide artificial gravity for their ships or to propel their spacecraft through the universe. Unfortunately, it seems that these kinds of anti-gravity devices will remain in the realm of fiction. To operate these devices, it would require the use of negative matter, which is a form of matter with negative mass (not to be confused with antimatter, which is like normal matter but with opposite charge). We have never observed negative matter in the universe, and we strongly suspect it can never exist, because it would violate our understanding of the conservation of momentum, which is a pretty big deal. However, at the largest scales in space, we already observe an anti-gravity effect. We've known since the observations of Edwin Hubble, about a hundred years ago, that our universe is expanding – over time, the average distance between galaxies grows. 
But in the late 1990s, two independent teams of astronomers discovered something remarkable: Not only is the universe expanding, but that expansion is accelerating. The universe is expanding faster and faster every single day.

Read More: The Universe May Be More Unstable Than You Think

The name we give to this phenomenon is dark energy, and it appears to be an anti-gravity force that is repelling all the matter in the universe. Anti-gravity actually isn't all that strange in Einstein's general theory of relativity, which is the set of equations we use to understand how gravity works. In general relativity, any kind of tension, like the tension in a stretched rubber band, creates an anti-gravity effect. But usually this anti-gravity effect is completely swamped by the normal, attractive gravity that we're used to.
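A quick back-of-the-envelope script can reproduce two figures from the article above: the 11.2 km/s escape velocity and the spin rate a rotating habitat would need to mimic Earth gravity. The 100-meter habitat radius is an illustrative assumption, not a number from the article.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

# Escape velocity: v = sqrt(2 G M / R)
v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
print(f"Escape velocity: {v_escape / 1000:.1f} km/s")  # ~11.2 km/s

# Spin needed for a rotating habitat to feel like 1 g at radius r:
# centripetal acceleration a = omega^2 * r, so omega = sqrt(g / r)
g = 9.81            # m/s^2
r_habitat = 100.0   # assumed habitat radius in meters (illustrative)
omega = math.sqrt(g / r_habitat)
rpm = omega * 60 / (2 * math.pi)
print(f"Habitat spin for 1 g at {r_habitat:.0f} m: {rpm:.1f} rpm")
```

Note that the required spin rate falls as the habitat gets bigger (omega scales as 1/sqrt(r)), which is one reason proposed designs tend toward very large rings: slower rotation reduces disorienting Coriolis effects on the occupants.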
Scientists Find Success in Creating Lab-Grown Blood
Researchers in the U.K. have achieved something of a world first: they have manufactured blood in the lab, which they've since administered to humans. The clinical trial will aim to test the safety and effectiveness of the lab-made blood in at least 10 healthy people. Two volunteers have already received a dose. The scientists — from the University of Cambridge, the National Health Service and the University of Bristol — are keen to find out whether their novel blood can last as long as normal red blood cells (which normally stay alive for about 120 days inside the human body) and whether there are any side effects. Blood Donations Transfusing donated blood has saved countless lives, allowing patients to get through complicated operations in good health. Blood products also help to treat chronic conditions such as sickle cell anemia. But blood donation, as a system, has many drawbacks. It requires a complicated infrastructure to collect and deliver blood where it's needed safely. That requires adequate refrigeration all along the route, and while that might be relatively straightforward for developed countries, it remains a challenge elsewhere in the world. Rarer blood types also suffer from dwindling supplies in the blood banks, which often means it's harder to find a suitable blood match for certain racial and ethnic groups. It's also costly to maintain the infrastructure; the average donation of less than half a liter of blood costs the U.K.'s National Health Service approximately £130 ($155). Read More: What Blood Types Can Reveal About Our Health That's why scientists from around the world, often funded by military agencies, have been searching for more practical alternatives for decades. Still, it's an endeavor that has thus far enjoyed limited success. "After 9/11, the U.S. Army invested millions of dollars in producing a blood replicant to be used for casualties in the battlefield, but it came to naught," says Lt. Col. 
Matthew Armstrong, who studies the fluid dynamics of blood at the United States Military Academy at West Point. Blood Transfusions to Lab-Grown Blood Back in the 1600s, doctors tried transfusing milk and wine into their bleeding patients — needless to say, it didn't go well. Then they moved on to using sheep's blood. That didn't work, either. Physicians eventually realized that human blood from a donor (with the same blood type as the recipient) is needed. A British doctor finally performed the first successful transfusion in 1818. By World War II, the blood donation infrastructure was sufficiently developed to transport large volumes of blood across great distances to reach those in need. In 1940, before the U.S. formally entered the fray, the "Plasma for Britain" campaign shipped 13 million units of blood from the U.S. to those in the U.K. who were already fighting the Nazis. Then, following the war, attention turned towards finding more reliable, lab-made alternatives to traditional blood. One approach has been to synthesize artificial substances that perform the same oxygen-carrying role as blood. Another line of attack has been to harness the power of stem cells to make real blood cells. This latest study is the most recent victory for those in the stem cell camp.  The researchers started with blood donations and used specialized magnets to isolate stem cells, which were then transferred into a lab environment. Here, the scientists ensured the ideal conditions to make these stem cells proliferate into large numbers of red blood cells. It took about three weeks to grow 15 billion red blood cells; that might sound like a lot, but healthy adults have between 3.92 and 5.65 million red cells in just a microliter (one-millionth of a liter) of their blood. Although the clinical trial is only using minimal amounts of lab-grown blood for now, the researchers hope to one day cultivate large volumes for people with rare blood types. 
It will be a challenge, however, to scale up this technology in a cost-effective way.
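For a sense of scale on the cell counts quoted above, a short calculation compares the 15 billion lab-grown cells with a typical adult's total red-cell count. The 5-liter blood volume is an assumed textbook figure, not a number from the study.

```python
# Midpoint of the quoted range of 3.92 to 5.65 million red cells
# per microliter of blood.
CELLS_PER_MICROLITER = (3.92e6 + 5.65e6) / 2

# An adult carries roughly 5 liters of blood (assumed typical value).
BLOOD_VOLUME_MICROLITERS = 5.0 * 1e6

total_cells = CELLS_PER_MICROLITER * BLOOD_VOLUME_MICROLITERS
lab_grown = 15e9  # three weeks of growth, per the article

print(f"Total red cells in the body: {total_cells:.2e}")   # ~2.4e13
print(f"Lab-grown share of that: {lab_grown / total_cells:.2%}")  # well under 1%
```

The comparison makes the scale-up challenge concrete: three weeks of culturing produced well under a tenth of a percent of one adult's red-cell count.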
Pliny The Elder's Radical Idea To Catalog Knowledge
Among the achievements of the ancient Roman Empire still acclaimed today, historians list things like aqueducts, roads, legal theory, exceptional architecture and the spread of Latin as the language of intellect (along with the Latin alphabet, memorialized nowadays in many popular typefaces). Rome was not known, though, for substantially advancing basic science. But in the realm of articulating and preserving current knowledge about nature, one Roman surpassed all others. He was the polymath Gaius Plinius Secundus, aka Pliny the Elder, the original compiler of scientific knowledge by reviewing previously published works. If he were immortal, Pliny would be celebrating his 2,000th birthday this year. Nobody knows his exact date of birth, but we can infer the year 23 CE because his nephew reported how old he was when he died. His death was on August 25, 79 CE, a date established by an unfortunate event associated with a volcano. Pliny was like a Renaissance man a millennium and a half before the Renaissance. Apart from his Roman Empire obligations as a military commander and provincial governmental official, he was a student of law, language, history, geography and every single branch of natural science. An indefatigable worker of intense curiosity about everything, he disdained sleep because it kept him from his tasks, and hated walking, because he could not walk and write at the same time. His Natural History, a 37-volume masterpiece of high literary quality yet immense factual density, attempted to record and systematize the totality of human knowledge about nature. He reviewed hundreds of ancient texts by the most illustrious authors in all scientific fields, extracting from them thousands of specific facts to preserve for posterity. As the late classicist David Eichholz wrote, Pliny's motivation was "his anxiety to save the science of past ages from the forgetful indifference of the present." 
Pliny was born in Como, Italy, into a family of sufficient stature that he was educated in Rome and then pursued a military career, including service as commander of a cavalry squadron in Germany. During that time, he wrote a history of Roman military activity in that region, after first composing a now lost treatise on how best to throw a javelin. By about 58 CE, Pliny had returned to Rome, where he devoted his writing to grammar and rhetoric and maybe practiced law. He avoided governmental involvement for years, probably because he was no friend of the crazy emperor, Nero. But he was friendly with Vespasian, who became emperor in 69. Pliny soon assumed governmental positions in Roman provinces in Spain, France and possibly Africa. All along, Pliny read voraciously (or had books read aloud to him). He collected fact after fact about the natural world, with the aim of compiling a comprehensive account of all the knowledge about nature that those before him had accumulated. Nobody else had ever produced anything so encyclopedic about natural science. (In fact, the very concept of "encyclopedia" was unknown at the time.) He published it in 77 CE, two years before the eruption of Mount Vesuvius. Pliny was then commanding a fleet of Roman ships that sailed to the vicinity of the volcano, perhaps because of Pliny's curiosity or possibly on a rescue mission. Tradition said Pliny died from inhaling toxic volcanic fumes, although some historians suspect he just had a heart attack. Pliny began Book 1 of his Natural History  with a dedication to the emperor Titus (Vespasian's son) and an itemization of what was to follow. First came a book on the universe, heavenly bodies and the elements, followed by several books on the geography of the Earth and its inhabitants. Book 7 discussed man and his inventions. Then came the animals (land and sea), and then one book each on birds and insects. 
Many volumes followed on various aspects of plants, trees, flowers and fruits, and their cultivation. Botanical themes continued in several books on the use of plant products in medicine. Next came more medicine, with commentary on medicinal substances derived from animals. Pliny finished with five books on metals and minerals, including their role in painting, providing the earliest detailed account of the history of art. Pliny's emphasis on facts obscured an underlying philosophy about the universe and humankind's place in it. His approach was not to defend any philosophy, but to discuss nature factually. That meant, as the classics historian Aude Doody wrote, "knowing that six European trees produce pitch, that there are three kinds of lettuce, that the best kind of emeralds come from Scythia." Yet Pliny's presentation was nevertheless infected with a deeply held belief that the universe existed to serve humankind. As Doody noted, Pliny believed that nature is "a conscious, creative power, who deliberately organizes the world with the needs of humanity in mind." That view reflected the philosophy of the Stoics, popular in those days, that the cosmos was infused with a powerful cohesive force, or pneuma, which unites everything that exists and determines matter's properties. "The whole of nature is animated by a providential presence that directs it, and this divine power can be identified both with nature and with the world itself," Doody commented. Which is what made comprehending all of nature so important for Pliny. Pliny's books served as an authoritative source of information about nature for centuries. "The Natural History continued to be used as a practical source of medical and scientific knowledge right into the 16th century," Doody commented. Today it remains a useful resource for scholars studying ancient knowledge and, in fact, is still sometimes cited in scientific papers today. 
In the 2020 Annual Review of Cell and Developmental Biology, for instance, Sarah M. Mohr and colleagues cite Pliny as one of the earliest authors to describe hibernation. And bioluminescence, a hot research topic in the 21st century, was first reported (in scyphozoans) by Pliny, as Steven H.D. Haddock and coauthors reported in the 2010 Annual Review of Marine Science. Yet for all its benefits, Pliny's Natural History had one serious drawback. It was full of errors. Pliny pretty much believed everything he read from ancient authorities, and essentially retweeted it all without any fact-checking. His book on land animals includes the mythical monoceros or unicorn, a "very fierce animal," he wrote, with "a single black horn which projects from the middle of its forehead." (It's not a rhinoceros — he describes that beast elsewhere.) And he mentions the legendary Ethiopian animal called the catoblepas, deadly to the human race, "for all who behold its eyes, fall dead upon the spot." (He might better have titled his animal book Fantastic Beasts and Where to Find Them. And yes, he describes the basilisk, which can also kill by sight, and destroys plants by its touch or even its breath.) On the other hand, Pliny did occasionally express skepticism and he rejected some outrageous claims. For one, he dismissed the idea of immortality. Had he been wrong, there would be a serious fire hazard at his birthday party this year.

10.1146/knowable-020223-1

Tom Siegfried is a science journalist in Avon, Ohio. His latest book, The Number of the Heavens, about the history of the multiverse, was published in September by Harvard University Press. This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. Read the original here.
Neanderthals Hunted and Ate Straight-Tusked Elephants
While it's already known that Neanderthals were skilled hunter-gatherers, new evidence suggests that they decided to hunt and eat some of the biggest animals of their time period.  A new study published in Science Advances by a team of researchers from Germany suggests that Neanderthals hunted and ate straight-tusked elephants. Straight-tusked elephants were the largest land animals of the Pleistocene epoch and roamed Europe and Asia between 800,000 and 100,000 years ago.  Skeletal Analysis of Straight-Tusked Elephants  In the 1980s, coal miners near Neumark-Nord, Germany, discovered a plethora of stone tools and animal bones. In 1985, archaeologists began evaluating the find, spending over a decade on the research.   Read More: Neanderthals Thrived for 400,000 Years, but Then Disappeared However, it wasn't until recently that researchers uncovered marks and scratches on nearly all of the 3,400 straight-tusked elephant bones included in the find.   Some of the bones were so heavy that researchers required a forklift to move them. According to archaeozoologist and lead study author Sabine Gaudzinski-Windheuser in an interview with Science, almost every bone shows signs of butchery, especially under a microscope.   These results mean that the Neanderthals who hunted these elephants were extremely thorough, according to researchers. They didn't leave behind much meat on the carcasses if any at all. Researchers were able to determine this due to a lack of bite marks on any of the bones, which would have indicated that scavenger animals, such as hyenas, were chewing on the remains.  Other important findings in the study include intense cuts on multiple elephant skulls and mandibles, indicating that the group of Neanderthals exploited nearly every body part of the hunted elephant.   
Read More: Neanderthals May Have Used Animal Skulls as Decor

Larger Implications for Neanderthals

Given the vast number of bones found at the site, and the marks found on them, it's already evident that Neanderthals hunted straight-tusked elephants for food. It's important to note just how big the elephants were. Adult male straight-tusked elephants could weigh up to 13 metric tons (over 28,000 pounds), which is roughly twice as large as modern-day adult male African elephants. Researchers estimate that the meat from one straight-tusked elephant would have been enough to feed 350 people over a week or 100 people over a month. Given this estimation and the fact that archeologists recovered so many elephant bones, researchers say it's fair to assume that Neanderthals sometimes moved in groups larger than previously thought. However, that may not have always been the case. While taking down an adult straight-tusked elephant likely took lots of planning and effort, it's unclear whether Neanderthals remained in large groups for long periods of time or split up soon after the hunt. Regardless, it's possible that because one group of Neanderthals in Europe was able to hunt down these animals, other groups of Neanderthals were able to as well, making our ancestors even more complex than once assumed.

Read More: Which Animals Did Early Humans Mainly Hunt?
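The two feeding estimates above can be sanity-checked by converting both into person-days. The 50 percent meat-yield fraction below is purely an illustrative assumption, not a figure from the study.

```python
# Cross-check: do "350 people for a week" and "100 people for a month"
# describe roughly the same amount of food?
ELEPHANT_MASS_KG = 13_000      # adult male, per the article
MEAT_YIELD_FRACTION = 0.5      # assumed usable fraction (illustrative)

person_days_a = 350 * 7        # 350 people over a week
person_days_b = 100 * 30       # 100 people over a month
meat_kg = ELEPHANT_MASS_KG * MEAT_YIELD_FRACTION

print(person_days_a, person_days_b)   # 2450 vs 3000: the same ballpark
print(f"{meat_kg / person_days_a:.1f} kg of meat per person-day")
```

The two estimates land within about 20 percent of each other, so they are internally consistent, and under the assumed yield each implies a ration on the order of a couple of kilograms of meat per person per day.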
How Hunter-Gatherers Used The Land Around Stonehenge
In the late 1950s, a Dutch archeologist visited Stonehenge, the prehistoric monument in southern England. The massive stone circle wouldn't be designated a UNESCO World Heritage Site for another three decades, and there weren't swarms of tourists or a protective fence. The archeologist was the only one around that day. He parked his car on the side of the road and walked up to the massive stone circle. The area seemed remote, almost abandoned. Scientists now know that when Stonehenge was first built thousands of years ago, it was not isolated. It was part of an area with other monuments, and scientists believe it buzzed with construction and other activity. But what about the time before Stonehenge? Was the area wooded? Partially wooded? Or an expansive space ideal for monument building? And what did that mean for the hunter-gatherer societies that once called the area home? Understanding an Icon Many historians and archeologists consider Stonehenge an "icon" of British history. Despite its legendary status, much about the memorial has been misunderstood. Because it currently stands alone in open fields, many people long assumed it was always set apart from human activity. Stonehenge was built in what is now Salisbury Plain in Wiltshire, England. Scientists have come to understand that Stonehenge was constructed in stages around 4,000-5,000 years ago, and it was one of several other monuments in the vicinity. The other monuments were made from less durable materials, such as timber, and they decayed after several centuries. Some, such as a structure built near the River Avon, were also made of stone but were dismantled by prehistoric people within centuries of their construction. Stonehenge has long been the last monument standing, and it has captured imaginations for thousands of years. 
In the twelfth century, History of the Kings of Britain described Stonehenge's origins and claimed the stones came from Ireland and became part of a monument intended to honor the Britons killed by Saxons. Scientists now call this legend "fantasy," partly because the Saxons didn't arrive until millennia later.   Although scientists can now separate Stonehenge fact from fiction, many unanswered questions remain. But there appears to be a promising answer for a question researchers have long considered pressing — what was the area like for ancient hunters and gatherers before the famed monument was erected?  Land Before Time   According to a 2022 study in Plos One, researchers were able to get a better idea of what the land around Stonehenge likely looked like while hunter-gatherer societies still lived in the area.  To learn more about life pre-Stonehenge, a group of researchers reconstructed the environmental conditions. They examined animal remains, sedimentary DNA and other preserved ancient samples such as pollen and spores. They also used optically stimulated luminescence (OSL) and radiocarbon dating to determine what the land resembled from 5,500 B.C. to 4,000 B.C. One of the researchers' primary interests was determining whether the area was open and conducive to animal grazing (like it is now), partially wooded or as previous studies had suggested, entirely covered in a canopy forest. By reconstructing the environmental conditions, the researchers found the hunter-gatherers in the late Mesolithic period would have most likely been in a partially-wooded area. Their analysis found the area was in a clearing of deciduous woodland inhabited for thousands of years by deer, cattle and the early people who hunted them. This means the earliest-known farmers in the area were using partially-open habitats that had already been used by earlier human populations. Thus, the land was long valuable to inhabitants before Stonehenge was a thing. 
Stonehenge was then constructed in a space that was already partially cleared.  These partially cleared areas may have been ideal grazing grounds for larger herbivores such as red deer, wild boar and aurochs  —  an extinct bovine species. Ancient bones from aurochs were found near Stonehenge, along with fish bones, which likely contributed to the diet of hunter-gatherers. According to the study, this area may have served as a home base for hunter-gatherers. The finding makes the monument even more valuable to researchers who want to learn more about the interactions between early farmers in the region and the hunter-gatherer societies they eventually replaced. 
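Radiocarbon dating, one of the dating methods the researchers used, rests on a single exponential-decay formula: the age follows from the fraction of carbon-14 remaining in a sample. A minimal sketch, using an illustrative fraction rather than a measured one:

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the surviving C-14 fraction:
    t = -(half_life / ln 2) * ln(fraction_remaining)."""
    return -(HALF_LIFE_C14 / math.log(2)) * math.log(fraction_remaining)

# A bone from around 5,500 B.C. is roughly 7,500 years old today,
# so a bit over 40% of its original C-14 should remain:
print(f"{radiocarbon_age(0.40):,.0f} years")
```

Optically stimulated luminescence works on a different principle (it measures when sediment grains were last exposed to light), which is why the two methods together give a more robust chronology than either alone.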
5 Famous Scientists That Made Their First Discoveries at a Young Age
It was a long-held belief in the scientific community that only younger scientists made significant advances. Having developed his theory of relativity at age 26, Einstein said, "A person who has not made his great contribution to science before the age of 30 will never do so." Although there have been many noted scientists in their 40s and beyond, these are five who accomplished their important discoveries at a young age.

1. Lawrence Bragg (1890-1971)

At 25 years old, Australian scientist Lawrence Bragg became the youngest person ever to receive a Nobel Prize, although his groundbreaking research began when he was only 22. In 1915, he and his father were jointly awarded the Nobel Prize in physics for their work in X-ray crystallography. His research revealed that X-rays are diffracted based on the crystal's atomic structure. The area of X-ray crystallography continues to impact the fields of chemistry, physics and medicine — and is still used to study crystalline atoms. Building on Bragg's work, other scientists have been awarded Nobel Prizes using X-ray crystallography in their research.

2. Subrahmanyan Chandrasekhar (1910-1995)

You may not know the name, but Subrahmanyan Chandrasekhar was a brilliant Indian-American scientist. An astrophysicist, his research at 19 years old eventually led to a Nobel Prize many years later, in 1983. At the time, it was known that stars eventually become unstable, collapse in on themselves, and end up as what is known as a "white dwarf." Chandrasekhar discovered that a white dwarf can have a mass of at most about 1.4 times the sun's; if the collapsing core is heavier, it becomes a neutron star or a black hole. The dividing line that determines a star's end product is called "The Chandrasekhar Limit."

3. Galileo Galilei (1564-1642)

Galileo contributed much to the world of science; one of his most important achievements was improving the telescope and becoming the first person to study space with it.
This led to his discovery that craters existed on the moon. But one of his most important scientific contributions occurred when he was 19. Galileo developed an accurate method to measure the density of objects using a counterweight. His discovery — the hydrostatic balance — has current applications in astrophysics, gemology and atmospheric modeling.

Read More: Yes, Galileo Actually Said That

4. Sir Isaac Newton (1643-1727)

Although best known for the concept of gravity — which he discovered at only 23 years old — Sir Isaac Newton impacted scientific innovation in additional ways. At 19, he developed calculus, which was a new approach to math at the time. And in his mid-20s, he had two major scientific breakthroughs. He invented the first reflecting telescope, using a mirror. He also significantly advanced the field of physical optics. Using a prism, he ascertained the composition of white light (sunlight) and discovered the color "spectrum" — a word he created. Newton's book Principia, which detailed the three laws of motion, is considered one of the greatest scientific works ever written.

5. Blaise Pascal (1623-1662)

Known to be a prodigy in math and science, Blaise Pascal showed remarkable intellect from early childhood. At 16 years old, he published a mathematical treatise. A few short years later — to aid his father's work as a tax collector — Pascal created the first successful mechanical calculator. He called it the "Pascaline," a predecessor to what eventually became the modern computer. He was also instrumental in fluid mechanics, creating "Pascal's Law." His research in this area led him to create the syringe and establish the foundation for the hydraulic press.
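The Chandrasekhar limit described above amounts to a one-line classification rule for a collapsing star's remnant. A minimal sketch, where the helper function is hypothetical and for illustration only:

```python
# Mass threshold separating white dwarfs from denser remnants,
# in units of the sun's mass.
CHANDRASEKHAR_LIMIT = 1.4

def stellar_remnant(core_mass_solar: float) -> str:
    """Classify a collapsing stellar core by mass (in solar masses)."""
    if core_mass_solar <= CHANDRASEKHAR_LIMIT:
        return "white dwarf"
    return "neutron star or black hole"

print(stellar_remnant(1.0))   # white dwarf
print(stellar_remnant(2.1))   # neutron star or black hole
```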
Scientific Highs And Lows Of Cannabinoids
The 1960s was a big decade for cannabis: Images of flower power, the summer of love and Woodstock wouldn't be complete without a joint hanging from someone's mouth. Yet in the early '60s, scientists knew surprisingly little about the plant. When Raphael Mechoulam, then a young chemist in his 30s at Israel's Weizmann Institute of Science, went looking for interesting natural products to investigate, he saw an enticing gap in knowledge about the hippie weed: The chemical structure of its active ingredients hadn't been worked out. Mechoulam set to work. The first hurdle was simply getting hold of some cannabis, given that it was illegal. "I was lucky," Mechoulam recounts in a personal chronicle of his life's work, published this month in the Annual Review of Pharmacology and Toxicology. "The administrative head of my Institute knew a police officer. … I just went to Police headquarters, had a cup of coffee with the policeman in charge of the storage of illicit drugs, and got 5 kg of confiscated hashish, presumably smuggled from Lebanon." By 1964, Mechoulam and his colleagues had determined, for the first time, the full structure of both delta-9-tetrahydrocannabinol, better known to the world as THC (responsible for marijuana's psychoactive "high") and cannabidiol, or CBD. That chemistry coup opened the door for cannabis research. Over the following decades, researchers including Mechoulam would identify more than 140 active compounds, called cannabinoids, in the cannabis plant, and learn how to make many of them in the lab. Mechoulam helped to figure out that the human body produces its own natural versions of similar chemicals, called endocannabinoids, that can shape our mood and even our personality. And scientists have now made hundreds of novel synthetic cannabinoids, some more potent than anything found in nature. 
Today, researchers are mining the huge number of known cannabinoids — old and new, found in plants or people, natural and synthetic — for possible pharmaceutical uses. But, at the same time, synthetic cannabinoids have become a hot trend in recreational drugs, with potentially devastating impacts. For most of the synthetic cannabinoids made so far, the adverse effects generally outweigh their medical uses, says biologist João Pedro Silva of the University of Porto in Portugal, who studies the toxicology of substance abuse, and coauthored a 2023 assessment of the pros and cons of these drugs in the Annual Review of Pharmacology and Toxicology. But, he adds, that doesn't mean there aren't better things to come.

Cannabis's long medical history

Cannabis has been used for centuries for all manner of reasons, from squashing anxiety or pain to spurring appetite and salving seizures. In 2018, a cannabis-derived medicine — Epidiolex, consisting of purified CBD — was approved for controlling seizures in some patients. Some people with serious conditions, including schizophrenia, obsessive compulsive disorder, Parkinson's and cancer, self-medicate with cannabis in the belief that it will help them, and Mechoulam sees the promise. "There are a lot of papers on [these] diseases and the effects of cannabis (or individual cannabinoids) on them. Most are positive," he tells Knowable Magazine. That's not to say cannabis use comes with zero risks. Silva points to research suggesting that daily cannabis users have a higher risk of developing psychotic disorders, depending on the potency of the cannabis; one paper showed a 3.2 to 5 times higher risk. Longtime chronic users can develop cannabinoid hyperemesis syndrome, characterized by frequent vomiting. Some public health experts worry about impaired driving, and some recreational forms of cannabis contain contaminants like heavy metals with nasty effects.
Finding medical applications for cannabinoids means understanding their pharmacology and balancing their pros and cons. Mechoulam played a role in the early days of research into cannabis's possible clinical uses. Based on anecdotal reports stretching back into ancient times of cannabis helping with seizures, he and his colleagues looked at the effects of THC and CBD on epilepsy. They started in mice and, since CBD showed no toxicity or side effects, moved on to people. In 1980, then at the Hebrew University of Jerusalem, Mechoulam co-published results from a 4.5-month, tiny trial of patients with epilepsy who weren't being helped by current drugs. The results seemed promising: Out of eight people taking CBD, four had almost no attacks throughout the study, and three saw partial improvement. Only one patient wasn't helped at all. "We assumed that these results would be expanded by pharmaceutical companies, but nothing happened for over 30 years," writes Mechoulam in his autobiographical article. It wasn't until 2018 that the US Food and Drug Administration approved Epidiolex for treating epileptic seizures in people with certain rare and severe medical conditions. "Thousands of patients could have been helped over the four decades since our original publication," writes Mechoulam. Drug approval is a necessarily long process, but for cannabis there have been the additional hurdles of legal roadblocks, as well as the difficulty in obtaining patent protections for natural compounds. The latter makes it hard for a pharmaceutical company to financially justify expensive human trials and the lengthy FDA approval process. In the United Nations' 1961 Single Convention on Narcotic Drugs, cannabis was slotted into the most restrictive categories: Schedule I (highly addictive and liable to abuse) and its subgroup, Schedule IV (with limited, if any, medicinal uses). 
The UN removed cannabis from Schedule IV only in December 2020 and, although cannabis has been legalized or decriminalized in several countries and most US states, it still remains (controversially) on both the US' and the UN's Schedule I — the same category as heroin. The US' cannabis research bill, passed into law in December 2022, is expected to help ease some of the issues in working with cannabis and cannabinoids in the lab. To date, the FDA has only licensed a handful of medicinal drugs based on cannabinoids, and so far they're based only on THC and CBD. Alongside Epidiolex, the FDA has approved synthetic THC and a THC-like compound to fight nausea in patients undergoing chemotherapy and weight loss in patients with cancer or AIDS. But there are hints of many other possible uses. The National Institutes of Health registry of clinical trials lists hundreds of efforts underway around the world to study the effect of cannabinoids on autism, sleep, Huntington's Disease, pain management and more. In recent years, says Mechoulam, interest has expanded beyond THC and CBD to other cannabis compounds such as cannabigerol (CBG), which Mechoulam and his colleague Yehiel Gaoni discovered back in 1964. His team has made derivatives of CBG that have anti-inflammatory and pain relief properties in mice (for example, reducing the pain felt in a swollen paw) and can prevent obesity in mice fed high-fat diets. A small clinical trial of the impacts of CBG on attention-deficit hyperactivity disorder is being undertaken this year. Mechoulam says that the methyl ester form of another chemical, cannabidiolic acid, also seems "very promising" — in rats, it can suppress nausea and anxiety and act as an antidepressant in an animal model of the mood disorder. But if the laundry list of possible benefits of all the many cannabinoids is huge, the hard work has not yet been done to prove their utility.
"It's been very difficult to try and characterize the effects of all the different ones," says Sam Craft, a psychology PhD student who studies cannabinoids at the University of Bath in the UK. "The science hasn't really caught up with all of this yet."

A natural version in our bodies

Part of the reason that cannabinoids have such far-reaching effects is because, as Mechoulam helped to discover, they're part of natural human physiology. In 1988, researchers reported the discovery of a cannabinoid receptor in rat brains, CB1 (researchers would later find another, CB2, and map them both throughout the human body). Mechoulam reasoned there wouldn't be such a receptor unless the body was pumping out its own chemicals similar to plant cannabinoids, so he went hunting for them. He would drive to Tel Aviv to buy pig brains being sold for food, he remembers, and bring them back to the lab. He found two molecules with cannabinoid-like activity: anandamide (named after the Sanskrit word ananda for bliss) and 2-AG.

These endocannabinoids, as they're termed, can alter our mood and affect our health without us ever going near a joint. Some speculate that endocannabinoids may be responsible, in part, for personality quirks, personality disorders or differences in temperament. Animal and cell studies hint that modulating the endocannabinoid system could have a huge range of possible applications, in everything from obesity and diabetes to neurodegeneration, inflammatory diseases, gastrointestinal and skin issues, pain and cancer. Studies have reported that endocannabinoids or synthetic creations similar to the natural compounds can help mice recover from brain trauma, unblock arteries in rats, fight antibiotic-resistant bacteria in petri dishes and alleviate opiate addiction in rats.
But the endocannabinoid system is complicated and not yet well understood; no one has yet administered endocannabinoids to people, leaving what Mechoulam sees as a gaping hole of knowledge, and a huge opportunity. "I believe that we are missing a lot," he says. "This is indeed an underexplored field of research," agrees Silva, and it may one day lead to useful pharmaceuticals. For now, though, most clinical trials are focused on understanding the workings of endocannabinoids and their receptors in our bodies (including how everything from probiotics to yoga affects levels of the chemicals).

'Toxic effects' of synthetics

In the wake of the discovery of CB1 and CB2, many researchers focused on designing new synthetic molecules that would bind to these receptors even more strongly than plant cannabinoids do. Pharmaceutical companies have pursued such synthetic cannabinoids for decades, but so far, says Craft, without much success — and some missteps. A drug called rimonabant, which bound tightly to the CB1 receptor but acted in opposition to CB1's usual effect, was approved in Europe and other nations (but not the US) in the early 2000s to help diminish appetite and in that way fight obesity. It was withdrawn worldwide in 2008 due to serious psychiatric side effects, including provoking depression and suicidal thoughts.

Some of the synthetics invented originally by academics and drug companies have wound up in recreational drugs like Spice and K2. Such drugs have boomed and new chemical formulations keep popping up: Since 2008, 224 different ones have been spotted in Europe. These compounds, chemically tweaked to maximize psychoactive effects, can cause everything from headaches and paranoia to heart palpitations, liver failure and death. "They have very toxic effects," says Craft.
For now, says Silva, there is scarce evidence that existing synthetic cannabinoids are medicinally useful: As most of the drug candidates worked their way up the pipeline, adverse effects have tended to crop up. Because of that, says Silva, most pharmaceutical efforts to develop synthetic cannabinoids have been discontinued. But that doesn't mean all research has stopped; a synthetic cannabinoid called JWH-133, for example, is being investigated in rodents for its potential to reduce the size of breast cancer tumors.

It's possible to make tens of thousands of different chemical modifications to cannabinoids, and so, says Silva, "it is likely that some of these combinations may have therapeutic potential." The endocannabinoid system is so important in the human body that there's plenty of room to explore all kinds of medicinal angles. Mechoulam serves on the advisory board of Israel-based company EPM, for example, which is specifically aimed at developing medicines based on synthetic versions of a class of cannabinoid compounds called cannabinoid acids.

With all this work underway on the chemistry of these compounds and their workings within the human body, Mechoulam, now 92, sees a coming explosion in understanding the physiology of the endocannabinoid system. And with that, he says, "I assume that we shall have a lot of new drugs."

10.1146/knowable-013123-1
Nicola Jones is a contributing editor and writer for Knowable Magazine and lives in Pemberton, British Columbia. This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. Read the original here.
Thousands Of Meteorites Hit Earth Each Year — Here's What They Bring
Earth is hit by thousands of meteorites each year, according to a 2020 study published in Geology — but they're small meteorites, not planet-changing asteroids. And with those meteorites come numerous elements that are the key building blocks for life on Earth. Until now, researchers believed that volatile elements like zinc and water might have come from asteroids that formed near Earth. However, a new study published in the journal Science indicates that these volatile elements may have come from asteroids originating closer to Jupiter, Saturn and Uranus. Though these small rocks have made a long journey through space, they've significantly changed our planet. Here is what meteorites bring to Earth.

What is the Difference Between Asteroids, Meteors and Meteorites?

If volatile elements come from asteroids, then why do they also arrive on Earth with meteorites? When it comes to space rocks, there can be some confusion between asteroids, meteoroids, meteors and meteorites — and that's without including comets.

According to NASA, asteroids are airless rocks, smaller than planets but larger than the space rocks that typically make their way through Earth's atmosphere. They are likely the leftover pieces from the formation of our solar system. Asteroids are often found in the asteroid belt — a region between Mars and Jupiter — though some are closer to Earth. Some asteroids are large enough to have their own "satellites," or objects in their orbit.

Read More: 4 Facts About Asteroids You May Not Have Known

Once in a while, two asteroids may collide, breaking off a small piece of rock or dust, which sails through our solar system. The broken piece is called a meteoroid. According to the Jet Propulsion Laboratory of the California Institute of Technology, meteoroids can also be debris from planets and comets.
While meteoroid fragments are small, they can be seen with the naked eye as they turn into meteors that streak across the night sky after entering Earth's atmosphere. When multiple meteors fall to Earth, they form a meteor shower. Meteorites, however, are the space rocks that fall through Earth's atmosphere and actually make it to the ground. And while it may seem that meteor showers produce a lot of meteorites, the opposite is true.

Elements Meteorites Bring to Earth

Meteorites can be rich in elements such as metals and silicate crystals. According to the Natural History Museum in London, there are three types of meteorites: iron, stony and stony-iron. Iron meteorites usually contain iron-nickel metal and trace amounts of sulfide and carbide minerals. Experts believe that these meteorites are part of an asteroid core that melted. Stony meteorites contain mostly silicate minerals and are the most common type of meteorite found. Stony-iron meteorites typically consist of equal parts iron-nickel and silicate minerals — which can include semi-precious gemstones like olivine.

The Right Element

As the recent study suggests, meteorites may be responsible for bringing volatile elements like water and zinc to our planet from beyond the asteroid belt. Volatiles are "elements or compounds that change from solid or liquid state into vapor at relatively low temperatures," according to a press release. "This contribution of outer solar system material played a vital role in establishing the Earth's inventory of volatile chemicals," says senior study author Mark Rehkämper in a press release. "It looks as though without the contribution of outer solar system material, the Earth would have a much lower amount of volatiles than we know it today – making it drier and potentially unable to nourish and sustain life." These meteorites contain the six most common elements found in living things: oxygen, carbon, hydrogen, nitrogen, calcium and phosphorus.
These six elements, combined with other volatiles like water and zinc, may have helped establish life as we know it. Even though the elements found in meteorites are also found on Earth, the addition of these elements and compounds from our solar system may have helped make our planet livable. Space rocks continue to help scientists better understand the origins of our solar system and how planets are formed. While these meteorites may not have left large impact craters, they have impacted Earth in other ways.

Read More: Here's What Meteors Streaking Toward Earth Look Like From Space
In the Brain, Romantic Love is Basically an Addiction
This article was first published on Feb. 13, 2015. "If at first the idea is not absurd, then there is no hope for it," Albert Einstein reportedly said. I'd like to broaden the definition of addiction — and also retire the scientific idea that all addictions are pathological and harmful.

Since the beginning of formal diagnostics more than fifty years ago, the compulsive pursuit of gambling, food, and sex (known as non-substance rewards) has not been regarded as an addiction. Only abuse of alcohol, opioids, cocaine, amphetamines, cannabis, heroin and nicotine have been formally regarded as addictions. This categorization rests largely on the fact that substances activate basic "reward pathways" in the brain associated with craving and obsession and produce pathological behaviors. Psychiatrists work within this world of psychopathology — that which is abnormal and makes you ill.

Is Love Addiction Real?

As an anthropologist, I think they're limited by this view. Scientists have now shown that food, sex, and gambling compulsions employ many of the same brain pathways activated by substance abuse. Indeed, the 2013 edition of the Diagnostic and Statistical Manual of Mental Disorders (the DSM) has finally acknowledged that at least one form of non-substance abuse — gambling — can be regarded as an addiction. The abuse of sex and food have not yet been included. Neither has romantic love.

Read More: Why Are We Addicted to Love?

I shall propose that love addiction is just as real as any other addiction, in terms of its behavior patterns and brain mechanisms. Moreover, it's often a positive addiction. Scientists and laymen have long regarded romantic love as part of the supernatural, or as a social invention of the troubadours in 12th-century France. Evidence does not support these notions.
Love songs, poems, stories, operas, ballets, novels, myths and legends, love magic, love charms, love suicides and homicides — evidence of romantic love has now been found in more than 200 societies ranging over thousands of years. Around the world, men and women pine for love, live for love, kill for love, and die for love. Human romantic love, also known as passionate love or "being in love," is regularly regarded as a human universal.

What Are the Symptoms of Love Addiction?

Moreover, love-besotted men and women show all the basic symptoms of addiction. Foremost, the lover is stiletto-focused on his/her drug of choice, the love object. The lover thinks obsessively about him or her (intrusive thinking), and often compulsively calls, writes, or stays in touch. Paramount in this experience is intense motivation to win one's sweetheart, not unlike the substance abuser fixated on the drug.

Impassioned lovers distort reality, change their priorities and daily habits to accommodate the beloved, experience personality changes (affect disturbance), and sometimes do inappropriate or risky things to impress this special other. Many are willing to sacrifice, even die for, "him" or "her." The lover craves emotional and physical union with the beloved (dependence). And like addicts who suffer when they can't get their drug, the lover suffers when apart from the beloved (separation anxiety). Adversity and social barriers even heighten this longing (frustration attraction).

In fact, besotted lovers express all four of the basic traits of addiction: craving, tolerance, withdrawal, and relapse. They feel a "rush" of exhilaration when they're with their beloved (intoxication). As their tolerance builds, they seek to interact with the beloved more and more (intensification).
If the love object breaks off the relationship, the lover experiences signs of drug withdrawal, including protest, crying spells, lethargy, anxiety, insomnia or hypersomnia, loss of appetite or binge eating, irritability, and loneliness. Lovers, like addicts, also often go to extremes, sometimes doing degrading or physically dangerous things to win back the beloved. And lovers relapse the way drug addicts do. Long after the relationship is over, events, people, places, songs, or other external cues associated with their abandoning sweetheart can trigger memories and renewed craving.

Love on the Mind

Of the many indications that romantic love is an addiction, however, perhaps none is more convincing than the growing data from neuroscience. Using fMRI, several scientists have now shown that feelings of intense romantic love engage regions of the brain's "reward system": specifically, dopamine pathways associated with energy, focus, motivation, ecstasy, despair, and craving, including primary regions associated with substance (and non-substance) addictions. In fact, I and my colleagues Lucy Brown, Art Aron, and Bianca Acevedo have found activity in the nucleus accumbens — the core brain factory associated with all addictions — in rejected lovers. Moreover, some of our newest results suggest correlations between activities of the nucleus accumbens and feelings of romantic love addiction among people who are wildly, happily in love.

Nobel laureate Eric Kandel has noted that brain studies "will give us new insights into who we are as human beings." Knowing what we now know about the brain, my brain-scanning partner Lucy Brown has suggested that romantic love is a natural addiction, and I've maintained that this natural addiction evolved from mammalian antecedents some 4.4 million years ago among our first hominid ancestors, in conjunction with the evolution of (serial, social) monogamy — a hallmark of humankind.
Its purpose: to motivate our forebears to focus their mating time and metabolic energy on a single partner at a time, thus initiating the formation of a pair-bond to rear their young (at least through infancy) together as a team. The sooner we embrace what brain science is telling us — and use this information to upgrade the concept of addiction — the better we'll understand ourselves and the billions of others on this planet who revel in the ecstasy and struggle with the sorrow of this profoundly powerful, natural, often positive addiction: romantic love. Excerpted from This Idea Must Die, edited by John Brockman. Used with permission.
The Best Times of the Day to Eat, According to Science
You've long heard that eating your biggest meal in the morning and your smallest meal at night is the best way to stay slim and trim. But what's the truth? What does science say about optimal eating times for keeping the weight off and staving off cardiovascular disease, diabetes and a host of other chronic illnesses?

Experts contend that while the quality of the food you eat is most important, the timing is a close second. Research shows that when people ate the same number of calories each day but ate most of them in the morning and at lunch, they lost more weight than people who ate most of their calories in the evening, says Courtney Peterson, an associate professor of nutrition at the University of Alabama at Birmingham. She says that they were also less hungry throughout the day. "It's a one-two punch because those who ate the most calories at night lost less weight but were also more hungry," says Peterson.

What's Happening in the Body?

According to experts, it's all about how your body reacts to hunger. "Subjective hunger," or how hungry you claim to be, is higher later in the day, as are levels of the hunger hormone ghrelin, says Peterson. This is the hormone that signals to your brain that you're hungry and it's time to eat.

Read More: The Science Behind Why We Get Hangry

You also burn fewer calories when you eat later in the day because of what's called the "thermic effect." This is the number of calories required for your body to digest, absorb and metabolize food. "Genes that are involved in burning and storing fat seem to perform worse later in the day and tend to activate pathways in the brain that store fat more easily," says Peterson. But it's not just that weight loss is dependent on the time of day that you eat. Eating earlier in the day may also be beneficial for overall health, for example, keeping blood sugar in check as well as lowering blood pressure and improving thyroid health.
A July 2021 study published in the journal Nutrients found that eating an earlier dinner improved blood glucose levels. Additionally, a November 2021 study published in the journal Epidemiology and Health found that eating later in the day increased cardiovascular risk factors like blood triglyceride concentration. All of these mechanisms depend on the time of day because of the body's circadian system, or internal biological clock. The circadian system is also the reason why your best sports performance is in the afternoon, and you're better at falling asleep at night.

Read More: How Your Circadian Rhythms Control Your Every Waking — and Sleeping — Moment

"Your body is optimized to perform certain tasks at certain times of the day, which makes sense from an evolutionary perspective," says Peterson. Millions of years ago, the sun produced most of the light which early humans depended on to live, and therefore, we were much less active at night.

When Should You Be Eating?

Dietitian and author Carolyn Williams says that it's also important to note that our body needs to have set times when we eat and when we don't eat. The body is set up hormonally and metabolically to require at least a 12-hour window when we aren't taking in food, says Williams. That's how we lived up until the 1970s, when snack foods and the 24-hour availability of food became a mainstay. The body is better able to shed pounds when there's at least a 12-16 hour window when we're not eating. "Prior to a half-century ago, most foods were eaten at home, there wasn't nearly as much snacking and the kitchen was closed for the night after dinner time," says Williams.

Read More: We've All Heard That Eating Late Is Unhealthy. Is It True?

If you tend to eat dinner later at night, it's still important to fast for 12-16 hours. That means if you eat your dinner at 9 p.m., then you shouldn't eat again until between 9 a.m. and 1 p.m. the next day.
Just because you eat dinner late at night, it doesn't mean you're doomed. But, it is worth making your last meal of the day the smallest of the three. And most importantly, after dinner each evening, the kitchen should be closed for business until it's time for breakfast the next morning.
How These 4 Animals Can Regenerate and Why Humans Can't
It's been long known that arthropods, meaning all animals with articulated limbs and bodies with segments, can rebuild legs and arms after a loss, according to Gerhard Scholtz, a comparative zoologist at Humboldt University Berlin. For instance, when crustaceans are attacked they can even break off their injured leg themselves, and sacrifice it to survive. Now, the eight-legged sea spider Pycnogonum litorale — think of a marine creature related to terrestrial arachnids like spiders, scorpions and mites — has shown that arthropods can recover entire parts of their body, too.

In a recent study published in Proceedings of the National Academy of Sciences, Scholtz's team ran experiments amputating various body parts of 23 specimens of sea spiders. Much to the scientists' surprise, the majority of the young and adolescent spiders could carry out near-complete regeneration of all of their missing body parts. "To regenerate things along the body axis, this was unknown, and there was kind of like a dogma saying arthropods weren't able to do that," says Scholtz. "But we have shown that it is possible. It was unexpected. It was a big surprise."

Exactly how sea spiders are pulling off this feat is still unclear, though. In fact, a growing body of research into the molecular mechanisms behind regeneration seems to suggest that there's no one-size-fits-all comeback.

1. Planarian and the Stem Cell Method

Take the flatworm, for example, also known as the planarian and one of the most impressive examples of regeneration in the animal kingdom. These aquatic worms are invertebrates, and can completely regenerate their entire bodies even after losing up to 90 percent of themselves. If they're beheaded, they can even grow their head back. "Those guys actually use a 'stem cell mediated method' of regenerating," says Catherine McCusker, a professor of molecular mechanisms of regeneration at University of Massachusetts, Boston.
"They have a pluripotent population of stem cells that just hangs out in the body at all [times] and is sporadically replacing damaged cells. When a big amputation happens, those cells are essentially called upon to regenerate the missing structure, no matter what it is." Marine animals called sea squirts use this same technique, too.

2. Axolotl and the Dedifferentiation Method

While the planarian is impressive in its regenerative abilities, the real MVP of regeneration seems to be the axolotl — the adorable Mexican water salamander. It is the only vertebrate that can regenerate many of its body parts no matter how old it is. It can replace entire missing limbs, its tail, its testes, its internal organs like the gut and heart, its spinal cord and even its neurons and part of its brain. The axolotl doesn't tap into its stem cell population, though; instead, it uses a technique known as dedifferentiation. Once they're injured, they grow a stub called the blastema from nearby dedifferentiated cells.

Read More: What the Axolotl's Limb-Regenerating Capabilities Have to Teach Us

"What they do is they essentially turn the clock back in these old cells in their body to start to behave like embryonic cells, right, but they aren't stem cells," says McCusker. "They're kind of somewhere in between a stem cell and like an adult cell, so they're not differentiated, but they know what they're going to be." This is called epimorphic regeneration and it's a technique of choice for many other animals that have the ability to regenerate. Terrestrial lizards and salamanders also use this technique. The starfish does too, and in some cases it can grow an entirely new body from just a single arm.

3. Hydra is Reshuffling, aka Morphallaxis

The hydra is a freshwater jellyfish-like organism that likes to stick to rocks and looks somewhat like an anemone — they're real jacks of all trades. In most cases, they undergo a process that's known as morphallaxis.
"Essentially, what that does is that they take whatever's remaining in the tissue, and they just shuffle the cells around, reorganize them, so that makes a perfectly formed mini-version with all of the appropriate structures," says McCusker. But, they can also do a combo. "Depending on how they're injured can flip the mode of how they regenerate," says McCusker. If they're injured more intensely, they'll also tap into the same process that the axolotl does, with a new pool of cells growing to replace the missing structure through cell proliferation and dedifferentiation.

4. Zebrafish Like Combos, Too

Zebrafish can regenerate, even in their adult age, everything from fins to spinal cord, retina, heart, kidneys and the most highly developed front part of the brain, the telencephalon — but they like combos too, because the mechanisms that allow for regeneration seem to be organ-specific. Fin regeneration looks similar to that of the axolotl or the starfish. But regeneration of the telencephalon calls on stem cells to save the day, just like in the flatworm.

Why Do Humans Suck at Regenerating?

Why is it that these animals can regenerate, while animals like us and other mammals are lousy at regenerating? That's still a puzzling question today, according to Andrey Elchaninov, the head of the Laboratory of Regenerative Medicine at the Vladimir Kulakov National Medical Research Center. To this day, there are various clashing hypotheses, and the scientific jury is still out.

Elchaninov's favorite theory is one tied to the evolution of our immune system. "If immunity is very high, like in mammals or birds, these species cannot regenerate legs, fingers and so on. Why is that?" Elchaninov says. Maybe it's because the immune system wants to prevent tumors, and the molecular mechanisms for regeneration are similar to those of tumor formation — for example, using stem cells.
"So, evolution chose 'Okay, these species will be less likely to have tumors, but they will not regenerate,'" says Elchaninov. This theory is supported by research on the African spiny mouse, a type of mouse that can regenerate its skin and fur after an injury. Studies show they don't seem to have any macrophages, a type of immune cell, in the skin they regenerate. "There are no macrophages in the trauma skin. That's why I think there is some connection between immunity and regeneration," Elchaninov says.

Progress in research on exactly how and why some animals can regenerate and others cannot will shine a light on whether humans could ever tap into some of these abilities. This is of specific interest for doctors, scientists and professionals working in the field of regenerative medicine. "For example, humans cannot regenerate fingers or legs, but in prenatal development we have all genes that contributed to leg-growth or finger-growth, and these are actually the same genes found in starfish and Hydra," says Elchaninov. "Maybe there will be a way to 'wake up' these genes also in postnatal development, and regenerate limbs." "But this would be in the future," Elchaninov says. "Far, far in the future, in my opinion."
Behold the future!

Boston Dynamics humanoid robot + ChatGPT-4 + Mercedes-Benz AI car data (Level 3) = a robot that can walk and navigate and chat with you.

All the data self-driving cars have gathered would be very useful to a robot trying to navigate the world.

submitted by /u/CollateralJustice
[link] [comments]
On the shores of Lake Victoria in Kenya, a short valley extends south towards the looming Mount Homa. From it have emerged some of the oldest-known stone tools used to butcher large animals, as well as the oldest remains of one of our early cousins, Paranthropus—a genus we think co-existed with our direct ancestors.

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36401-7

Lithium metal batteries (LMBs) with inorganic solid-state electrolytes suffer from lithium dendrites propagation. Here, the authors demonstrate the production of stable lab-scale LMBs using an Ag-coated Li6.4La3Zr1.7Ta0.3O12 inorganic solid electrolyte in combination with a silver-carbon interlayer.
Society Tells Me to Celebrate My Disability. What If I Don't Want To?

My memory of the moment, almost a decade ago, is indelible: the sight of a swimmer's back, both sides equal—each as good and righteous as the other. An ordinary thing, and something I had never had, and still don't have. To think of that moment is to feel torn—once again—about how I should respond to my condition: whether to own it, which would be the brave response, as well as the proper one, in many people's eyes; or to regret it, even try to conceal it, which is my natural response.

I have a form of cerebral palsy called hemiplegia, which affects one side of the body. The word has two parts: hemi, meaning "half," and plegia, connoting stroke or paralysis. I have had a "half stroke," but I prefer the romance of my high-school Greek teacher's translation: I was, as he put it, struck on one side. Plus, it's a more accurate description of what happened to me. At birth, the forceps used to pull me out of the womb pierced my baby-soft skull and damaged my cerebral motor cortex. On my left temple is a tiny scar left by the forceps and shaped, rather unfortunately, I've always thought, like an upside-down cross—the anti-Christ symbol.

I look, I'm told, basically normal. I am not in a wheelchair. I have good control of my limbs. I write and I paint. I can do most everyday tasks. Although my symptoms are typical—muscular tightness, limited movement ability, poor muscle development—they are mild. For this reason, everyone calls me lucky. And it's true—compared with other kids in the waiting room of the cerebral-palsy ward, I was lucky, extremely lucky. But still, I never asked to be in that waiting room. I did not look like those kids inside the hospital—would balk at being classed with them, even—but my body didn't fit in outside the hospital either. Doctors, friends, parents—a platoon of people who have never experienced what I have—commend me on my normalness. This always makes me feel accomplished, until I realize that what they really mean is: Normal, considering …

When I was a child, my symptoms were more pronounced than they are now. I simplified my deformities: I had a Good Side and a Bad Side, even telling kids at primary school that half my penis didn't work (I had to have some fun). My Good Side, my left, was my superhero; I was actually right-handed, but taught myself to use the superhero side. My Bad Side, my right, was a cave-dwelling creature, a Caliban, a spindly, weak, shameful thing that I'd hit with my left hand when I was angry. I used to scream at my mother, crying, You did this. You gave birth to this.

I had a noticeable limp. My right heel couldn't get to the floor, which left me on perpetual tiptoe. Unless my foot was strapped into a splint, my ankle couldn't reach 90 degrees—the doctors' acid test of normality. I needed shoes of two different sizes to allow for the added width of my daytime splint. My mother would explain the situation to shop assistants as I sat on the little sofa waiting for my mismatched shoes to arrive. Their faces turned to pity, or something like disgust. Did they think I was contagious? My nighttime splint had no give whatsoever. When I'd get up to pee in the night, waddling along in the strange walk that the splint forced on me, I'd pass my bathroom mirror and stare. Despite the crocodile pattern the nurse had let me choose, it all looked so medical, so unnatural—so, well, disabled. And I would think, I am not this.


[Read: A disability film unlike any other]

As if to make it official, my doctor said, "You do not have motor skills." I've never been able to move just one finger on my right hand, for example. If one finger is moving, they're all dancing some uncoordinated dance. I needed help in class. I found it tricky to cut and paste, to organize myself, or to write for long periods of time, because my hand would cramp. It was humiliating enough to have a personal classroom assistant, but the assistant, Yulia, also had to massage my foot each morning to relax my muscles. She wasn't popular with the other kids at school. Her foreign accent, tough manner, and short haircut made her a prime target for crude, all-boys-school-style ridicule. I often found it easier to join in than to defend her. I wanted everyone to think I didn't need her. She never cared about the other kids being rude. But if she overheard me, she'd look at me with eyes that made it clear I was betraying her.

I would meet her in the black box of my primary-school drama studio half an hour before classes began. I'd take off my shoe, splint, and sock. She'd squeeze Johnson's Baby Oil onto her hands and then take my foot roughly—kneading and pushing and pulling it. I would apologize again and again in my head. I'm sorry you have to do this. I'm sorry I'm like this.

Sometimes another kid would walk in. My body would revolt in panic—­I'd squirm away from Yulia, desperately ashamed of the vision of my naked foot and ankle, moist with oil, poking out of my trouser leg. Something haunted me about the fleshy color of my skin with nowhere to hide in that black, black room. I'd pull my sock back on as quickly as humanly possible and sit there, staring at the floor, until Yulia firmly asked him to leave. When he'd gone, she'd reach an arm out, indicating that I should take my sock off once more.

At age 12, I beat my lifelong best friend—a boy I'd been in diapers with—in a tennis match at his grandfather's house. He didn't like losing, and he screamed from the baseline, "You disabled cunt." I ran inside. In the kitchen, sobbing, I bumped into his grandfather and his mother—incidentally, my mother's best friend—who asked what was wrong. I began to tell her, a woman I'd known all my life, a woman who'd known about my disability before I could even speak, and she lifted a finger in the air and said, "Ah. Don't mention names. No one likes snitches." I turned to his grandfather, hopeful, but he simply said, "No one said that to you, Emil." I expected kids to be nasty, but had thought adults grew out of it.

As I prepared to leave primary school, I was also preparing for an operation on my Achilles tendon, which would mitigate my limp. The operation was scheduled for the final day of the school year, and so while every other boy in my class piled into a bus headed for a theme park to go on rides with names like Stealth and Nemesis Inferno, I was driven to a hospital in the suburbs of London. My mother spent the day reminding me that I'd never liked roller coasters anyway. I was given a wheelchair until I could walk again, but after one day of being eyed by strangers, I opted for crutches. I longed to hold a sign that read THIS CHAIR IS TEMPORARY. I AM LIKE YOU. My cast eventually came off, my heel now reached the ground, and my strange, clodhopping gait was gone.

Emil Sands, 2022. Three Figures. Oil on paper, collage.

I moved on to secondary school. No more splints, no more personal assistants, no more massages, no more limp. My parents assured me: Normal starts now. But that was not true. I was hit with a new regime—a twice-daily therapy program of swimming, stretching, and working with weights.

Each morning, I arrived in the funky-smelling changing room of my all-boys school sometime between 7:15 and 7:30. I found a space on the bench and a corresponding peg that wasn't already littered with the chucked-off black-and-maroon ties, white shirts, trousers, sports bags, and boxers of the swim squad, which got there before me. In order to minimize my time spent naked, I was already wearing my regulation Speedo trunks under my uniform. I took off my own tie, shirt, and trousers and dumped them in my black-and-blue Sports Direct bag, which I carefully hung up.

Looking down at my nearly naked body, I longed for a different one. Something about puberty had made me fat, like a baby: My stomach ballooned out so that I could only just find the tips of my toes beyond it. My Good Side looked exactly that—good. But my Bad Side remained a perpetual disappointment. The swimming was meant to mitigate the effects of my disability, but swimming was the last thing I wanted to do.

[Read: Doctors are failing patients with disabilities]

The changing room connected directly to the pool, and the stench of chlorine was unavoidable. With nowhere else to go in this windowless part of the gym complex, it found your nose and clogged it. From my seat in the changing room, I could hear the swim squad, which had already been training for 40 minutes—the reverberating splashes, the critical shouts, the coach's whistle. Their sonic booms stretched up past the viewing gallery to the ceiling and crashed back down again, echoing off the water.

I made my way through the corridor to the pool, holding my arms around my tummy. A mass of indistinguishable squad muscle—­here a lean leg, there a powerful arm, there a goggled head on a bull-muscled neck—filled four of the pool's five lanes. I approached the fifth—the teachers' lane—and reluctantly lowered myself in. This was the only place where the school and swim coach could think to put me. My elderly French teacher was usually in there already, breast-stroking at the same pace his lessons went. Of everyone in this pool, it was his team I was somehow put on.

Even underwater, I attempted to cover my wibbling fat, knowing that the squad's goggles allowed for plain viewing of my body. As I went up and down the pool, doing my customary half-swim, half-walk, their thoughts consumed me. Did they know why I was in their pool? Had their coach told them? Did they care? Scarier still, were they so passionate about their sport that they didn't even notice me?

After swimming, they filed back into the changing room. They were teammates: not exactly friends, but they shared a closeness. They laughed about races won and lost. They stretched out, leaned over, bent down. Like ancient Greeks in the gymnasium, they had bodies that were a total luxury. I showered in my trunks after them, then hurried to a private cubicle to change into my underwear, all the time careful to avoid the mirrors that lined the walls. I covered my body with towels, hands, arms, anything at all so that no one, myself included, could see it in its entirety.

When one of the swimmers was dressed and ready to leave, the others shouted a goodbye and nodded, lifting their head and their eyebrows together in a way that encompassed the entirety of masculine prowess. But not once in all the years I changed with them did any of the swimmers look my way.

Well, there was one time, actually. Marcus was a boy, two or three years ahead of me, whom everyone either knew or knew of. He was, as far as I could tell, everything anyone could ever want to be. We never spoke—why on earth would we?—but so powerful was his physical presence that I became acutely aware of my lumbering body if he so much as walked past me in the school corridor. He seemed to be taller than anyone else in his year, although that probably wasn't the case. He was always greeting people, stretching out an arm and a hand for some über-cool, effortless handshake.

The incident occurred when I was 15 or 16. I came out of the pool late, and only Marcus and a friend of his were still getting changed. By this point, my body had morphed slightly. I still felt overweight and cumbersome, and my disability still left half of my body lacking, but the past three years of training had at least made me look more like others my age. After showering, I went back to my bag and began getting dressed.

Marcus was in his underwear with his back facing me. I don't know quite what happened that day, but some deep-set mixture of jealousy, longing, and desire prevented me from looking away. His back was the mightiest thing I'd ever seen. Everywhere you looked it was packed with muscles. And the symmetry! He turned and Achilles was standing there in the locker room. I traced every contour, every ebb of his body, with my eyes, inventorying every part of him that I was not.

I came to, and realized that both Marcus and his friend were standing there, watching me staring at him. There were codes, and I, a locker-room weirdo, had just broken them.

"Dude," said the friend to Marcus, cutting the silence with a cruel splutter of laughter, "I think someone likes what he sees."

Marcus started laughing and mock-­provocatively tensed his body in my direction. "You want a piece of me, Sands?"

And while I did a double take—had he just said my name?—I understood how far away from these boys I was. How, if I answered his question honestly, the truth would be out: No, I don't want a piece of you. I want all of you. I want to have what you have.

I said nothing. I backed away into a bathroom stall. I didn't come out again until they had left.

I stopped swimming a few months after this, defying my parents, my school, and the medical committee that oversaw my rehabilitation. I had developed psychosomatic symptoms that made it unfeasible for me to carry on. At around the same time in the morning as I would start my swim, I would begin to hear a chorus of voices in my head. They screamed at me in a dark gibberish. Although it wasn't English, I knew what they were telling me: I was worthless, useless. I would stop mid-stroke and hold my hands to my ears, trying to make them stop. At first, I thought the water had made my ears go funny. But the voices grew louder, darker, and more overwhelming. There were more hospital appointments. More concerned doctors. A specialist wondered if we knew the word schizophrenia.

When I stopped swimming, the voices stopped too, suggesting that the episodes were a result of some severe anxiety connected with the pool. As a deal, I swapped my five swims a week for more time in the gym and more stretching. I preferred this. For one, I could be clothed. But more than that, I could work toward goals that were less about competition and more about personal growth: getting big arms or a six-pack, having a meal plan based on eating lots of proteins. Things that most boys my age wanted.

As I understand now, my disability pushed me harder. Closed doors draw attention to open ones. When I was in my early teens, I competed for my school's annual reading prize: First place went to the student who was best at delivering a poem or short story aloud. I got through the heats easily. Backstage, at the final, I watched as others nervously ambled about, familiarizing themselves with the Keats or Kipling poems that their parents had perhaps helped them pick out for this round. One by one, they were called up, until eventually it was my turn. I took to the podium. I opened my book. I began with the first line of the first chapter: "In Which We Are Introduced to Winnie-the-Pooh and Some Bees, and the Stories Begin." It is the chapter with the line "Then he climbed a little further … and a little further … and then just a little further."

And I won. It didn't bother me at all that no one else was particularly interested in winning this made-up prize. What mattered to me was that I'd won it on my own, reading something I loved, words of my choosing. I remember feeling at the time, as silly as it sounds, that somehow, by reading a children's book when everyone else was pretending to be an adult, I'd beaten the system. What system that was, I still don't know—this was just a diction competition for adolescents at a private school. But I held the feeling close.

Emil Sands, 2021. Self-Portrait. Oil on plastic.

There were few physical activities I actually could not attempt, but many I could not do well. I am thinking, in particular, of football—­soccer. I tried to play when I was very young. Had I persevered, the necessity of using both legs would have proved helpful in rehabilitating my right side. But a concrete block descended if a ball was ever brought out at a friend's house or while on holiday. If a stray ball came off someone's foot in a park and I was expected to kick it back, I froze. I could not play. I did not play. I refused to play.

There was a power in saying no, but saying no also left me out. Every day at school, a lunchtime soccer game stretched across the fields outside. I took a different door—I began to go to the empty art studios. The studios were adjacent to the fields, and from my easel, I could see the game. Muffled shouts came my way. At a certain point, however, I began to look forward to my solitary lunchtime activity. The prospect of making new work and concentrating on something that mattered to me felt important. I started to think about going to art school and used the extra hour a day to create a portfolio.

As we reached the final year or two of school, the studios began to fill up a little. Two younger boys began editing their street photography in the computer suite. An art teacher inspired a group of classmates to come in every day and try screen printing. Although my school was only for boys in the earlier grades, it was coed in the final two years, and girls and boys could work in the studios together. My friend Sarah often sat across from me, drawing tiny floral patterns that, by the end of lunch, had ballooned out to fill the page. In the studios, on busy days, you couldn't hear the game outside at all.

Today, hardly anyone knows I am disabled. I tell no one, because I believe people will like me less. Maybe just for a split second. Maybe for longer. Or maybe I should rephrase: I believe people will like me more if they think I am like them. So I go out of my way to keep my disability private. When I am tired, a residue of my old limp returns. On the few, but truly excruciating, days that someone notices and asks if I have hurt my leg, I lie and say I twisted my ankle. Oh shit, how? And, demoralizing as it may be, I keep going—­on the stairs; last week in the shop; literally just before I saw you. On the rare occasions when I don't lie, I always wish that I had. Wait, what? You're disabled? The chasm opens again.

[Read: On disability and accepting help]

I go to the gym every day of the week. No one makes me do it—not because my cerebral palsy is gone, but because I am an adult. My body is a "good" body: It is strong, muscular in places, and tight-ish. It's not Marcus's, but I am not Marcus. In the gym, I am recognized, and men I've never spoken to nod their head my way.

Nevertheless, I am wary. Do they see that my right side is less muscular than my left? That I sometimes have trouble picking up the weights in a coordinated fashion? That, when I'm fatigued, I drop them just outside the little ridges I'm meant to leave them in? Do they think I'm weak because the weight I lift is low, to make up for my right side's deficiency? I want to tell them that all of these things are not my fault, but the fault of a rogue forceps blade 23 years ago. I want to show them my medical records, drag them to my gym bench, and point out everything that's wrong with my form, or my body, or my brain, because then I could stop second-guessing. I could own my condition. But I am not Achilles.

When my dad first overheard me lie about my limp, he was astonished. Within the family, my disability has become an easy, even joked-about, topic. We had a follow-up conversation in which he asked me why I had done that. Exasperated and embarrassed, I pretty much told him to back off. He did, but his eyes said enough: This is not the son I raised. And he was right. I know more than most that difference must be celebrated, and that each time I hide, the shame builds—for me, for others like me. Somehow, I have become the bully, or at least the bully's accomplice.

I am not sure I want to hide anymore. I'd rather embrace my disability than fear its fallout. But it would be a lie to say I love every part of my body. I am still grappling with the ways I have been made to feel that my body does not belong—and with the conviction that it is easier for everyone that I be a failing normal rather than a normal disabled.

This article appears in the March 2023 print edition with the headline "Struck on One Side."

Asymmetric eROSITA bubbles as the evidence of a circumgalactic medium wind

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36478-0

The origins of the pair of X-ray bubbles, called eROSITA bubbles (eRBs), detected in the halo of the Milky Way are debated. Here, the authors show hydrodynamical simulations suggesting that a circumgalactic medium wind model can explain the asymmetric eRBs.
⭕ New Open-Source Version Of ChatGPT

GPT is getting competition from open-source.

A group of researchers centered on the YouTuber Yannic Kilcher has announced that they are working on Open Assistant. The goal is to produce a chat-based language model that is much smaller than GPT-3 while maintaining similar performance.

If you want to support them, they are crowd-sourcing training data here.

What Does This Mean?

Current language models are too big.

They require millions of dollars' worth of hardware to train and run. Hence, access to this technology is limited to big organizations; smaller firms and universities are effectively shut out.
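To make the scale concrete, here is a back-of-the-envelope sketch. The 175-billion-parameter figure is GPT-3's published size; the 80 GB per accelerator is an assumption based on typical high-end data-center GPUs, and the calculation covers only storing the weights for inference, not training:

```python
def inference_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the model weights.

    Assumes half-precision (fp16) storage, i.e. 2 bytes per parameter.
    Ignores activations, KV caches, and optimizer state, which add more.
    """
    return n_params * bytes_per_param / 1e9

# GPT-3 has roughly 175 billion parameters.
weights_gb = inference_memory_gb(175e9)   # 350 GB of weights in fp16

# Ceiling division: how many (hypothetical) 80 GB GPUs just to hold them.
gpus_needed = -(-weights_gb // 80)

print(f"{weights_gb:.0f} GB of weights -> at least {gpus_needed:.0f} x 80 GB GPUs")
```

Even before any training cost, merely serving a model of this size demands a multi-GPU server, which is exactly the barrier that smaller open models aim to remove.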

Shrinking and open-sourcing models will facilitate academic research and niche applications.

Projects such as Open Assistant will help to make language models a commodity. Lowering the barrier to entry will increase access and accelerate innovation.

What an exciting time to be alive!

Thank you for reading! I really enjoyed making this for you!
The Decoding ⭕ is a thoughtful weekly 5-minute email that keeps you in the loop about machine learning research and the data economy. Click here to sign up!

Beach erosion: Satellites reveal how climate cycles impact coastlines
Researchers from UNSW Sydney have analyzed millions of satellite photos to observe changes in beaches across the Pacific Ocean. The findings, published in Nature Geoscience today (Feb. 10), reveal for the first time how coastlines respond to different phases of the El-Niño-Southern Oscillation (ENSO) cycle.
Dowsing for facts: can a sceptic find science in water witchery?

Used by water companies but debunked by science, crossing rods in Wiltshire has this writer intrigued

Nestling in the shadow of a white horse and a Neolithic long barrow, in a renowned crop circle hotspot, Alton Priors, in Wiltshire, feels like the perfect venue for a spot of water witchery. Prompted by the news that Thames Water and Severn Trent Water use dowsing rods to detect water leaks, I've arranged to meet my mum – a geologist and amateur dowser – to investigate the phenomenon for myself.

There are other reasons for picking this particular location. Geologically speaking, Alton Priors lies on the boundary between a chalk escarpment and sandstone, the latter underlain by clay, which means there are numerous springs gushing out of the ground. The local churchyard is also where an acquaintance of my mum once suggested she try dowsing, because "he just had a sense it would work there". Sure enough, her rods crossed.


Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-35918-1

In this work, the authors identify regulators of actin filament assembly involved in chiral organisation of the actin cytoskeleton in single cells and chiral alignment of cells in groups. This provides insights into how actin-driven chirality underlies tissue and organ asymmetry.
Pt-induced atomic-level tailoring towards paracrystalline high-entropy alloy

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36423-1

Paracrystalline state is still challenging to reach in alloy systems in a controlled manner. Here, the authors present an atomic-level tailoring route to create paracrystalline Zr-Nb-Hf-Ta-Mo high-entropy alloy through local amorphization induced by atomic-level Pt with negative mixing enthalpy.

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36550-9

Delivery of immune therapy drugs to tumours might be hampered by their limited bioavailability and the difficulty of targeting complex exogenous compounds. Here, the authors trigger immunogenic cell death by activating tumour-cell-intrinsic pathways with CRISPR-based nanotechnology, enabling an efficient anti-tumour immune response in mouse models of melanoma.
A latitudinal gradient of deep-sea invasions for marine fishes

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36501-4

This study finds that high-latitude fish clades with the fastest speciation rates also exhibit elevated rates of depth evolution, creating a prevailing latitudinal gradient of deep-sea invasions concentrated in poleward regions. These results advance our understanding of how niche lability and climate shape global patterns of species distributions.

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36321-6

Glioblastoma is a highly aggressive, and also the most common, brain tumour type in adults. Here, the authors generate a nanoparticle encapsulating the TLR7/8 agonist R848, which induces tumour regression in mice by reprogramming myeloid cells independently of T and NK cells.

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36402-6

Here the authors show how the MTREC core protein Red1 binds to and sequesters Pla1 from the 3'-end processing machinery to hyperadenylate cryptic unstable transcripts and target them to the exosome for efficient degradation.

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36488-y

Quantum anomalous Hall junctions show great promise for advancing next-generation electronic circuits. Here, the authors demonstrate a scalable method for synthesizing heterostructures of magnetic topological insulators with regions of distinct Chern numbers and characterize the chiral interface modes that emerge at the interface.
Histone variant H2A.Z modulates nucleosome dynamics to promote DNA accessibility

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36465-5

Here the authors show that H2A.Z histone variant incorporation reduces the nucleosomal barrier for transcription. Furthermore their simulations reveal that H2A.Z facilitates spontaneous DNA unwrapping from the histone octamer and enhances nucleosome gaping.

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36367-6

EDA variants are associated with X-linked hypohidrotic ectodermal dysplasia. Here, the authors report the crystal structure of the human EDA-EDAR complex, reveal the important role of this complex in ectodermal development and uncover the structural mechanism of disease-related mutations in EDA.
Evolution of protease activation and specificity via alpha-2-macroglobulin-mediated covalent capture

Nature Communications, Published online: 11 February 2023; doi:10.1038/s41467-023-36099-7

Custom proteases find applications as therapeutics, in research and in biotechnological applications. Here, the authors establish a protease selection system based on bacterial alpha-2-macroglobulin protease inhibitors and evolve staphylococcal proteases for increased activity and altered specificity.
