

The "absolutely monstrous" cosmic blast is estimated to be a 1-in-10,000-year event

TODAY
2045 has been put out as a possible date, but recent advances in AI have me thinking it could potentially happen much sooner
[link] [comments]
We worked alongside the CAS folks to contribute to this textbook, Modern Meat. All credit to Kris and his team over at CAS for this multi-year effort. If you're thinking about getting into the space or reading from folks doing the work, this is a great place to start.
[link] [comments]
As a translator by trade who has since moved on to greener pastures, I feel like I've seen the developments regarding generative AI before. Something very similar happened a few years back, when neural networks led to a big jump in the quality of machine translation output. Jobs in the translation industry have not been the same since, although the downward trend actually started a bit earlier than that.
I think it all began with the introduction of CAT (computer-aided translation) tools in the early 90s. These dissect texts into small chunks, often on the sentence level, and save them together with their translations in a database. If a similar segment shows up in a later text, the software fetches the previous translation and suggests using all or part of it, potentially saving the translator time and increasing their productivity.
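To make that concrete, here is a minimal sketch (in Python) of the idea behind a translation memory, not any particular CAT product; the example segments and the suggest_translation helper are made up for illustration. Stored segment/translation pairs are fuzzily matched against a new sentence:

```python
from difflib import SequenceMatcher

# A toy translation memory: source segments mapped to stored translations.
# (Hypothetical example data; real CAT tools use large per-client databases.)
translation_memory = {
    "The device must be switched off before cleaning.":
        "Das Gerät muss vor der Reinigung ausgeschaltet werden.",
    "Do not expose the device to direct sunlight.":
        "Setzen Sie das Gerät keiner direkten Sonneneinstrahlung aus.",
}

def suggest_translation(segment: str, threshold: float = 0.75):
    """Return the stored translation whose source segment best matches
    the new segment, if the similarity is high enough (a 'fuzzy match')."""
    best_score, best_pair = 0.0, None
    for source, target in translation_memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_score, best_pair = score, (source, target)
    if best_pair and best_score >= threshold:
        source, target = best_pair
        return {"match": source, "suggestion": target, "score": round(best_score, 2)}
    return None  # no usable match; translate from scratch

# A slightly reworded sentence still retrieves the earlier translation.
print(suggest_translation("The device must be turned off before cleaning."))
```

Real CAT tools use far larger databases and more sophisticated matching, but the basic store-lookup-suggest loop is essentially this.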
Translators could now translate more text in less time, and for freelancers, this could also translate (ha) into higher income. But big translation agencies had something different in mind: They would use the productivity boost to lower their prices and undercut the competition in the hopes of attracting more customers.
Obviously, competing companies would do the same, so the rates translators could realistically ask for entered a downward spiral. When neural networks and translation tools like DeepL arrived, there already wasn't much left to disrupt.
A translator's income is now laughably low even in my home country, Germany, where the profession has traditionally been highly regarded. Your only chance at making a decent living in the industry is to be a very skilled freelancer who offers additional services that are not as easily automated, or to start your own translation agency and pay other translators pennies instead.
Most employed translators now work as low-paid project managers, coordinating the translation process between the clients and a pool of freelancers instead of translating anything themselves. Those who actually do translate texts often have to pre-translate them using DeepL or similar tools and then try to salvage the results.
The combined household income of two people working in the translation industry often won't even get them into the middle class. Instead of increasing prosperity, technological progress destroyed it.
I think something similar will happen to other industries due to the proliferation of AI-based tools, but maybe I'm comparing apples with oranges? I'm interested to hear what others think about this example. Maybe there's some hope after all.
As a side note, I do think that some of the damage to the translation industry could have been mitigated if translators had actually fought back instead of just accepting the terms dictated by the big agencies. Unions in Germany are still comparatively strong, and there's a huge trade union that would have helped translators fight for better working conditions if they had been willing to become members. But alas, I don't know a single one who did, apart from myself.
[link] [comments]
Capitalism is all about the free market, which means the level of innovation depends on demand and supply. And when there's high demand, companies get into fierce competition with each other, which is why we get new and improved products like the iPhone every year. But it's important to note that the middle class makes up the majority of buyers in this system. So, if AI takes over middle-class jobs, these buyers won't have the same purchasing power, and that could cause a drop in innovation and profits for big companies.
[link] [comments]











A new international study finds that the growth and development benefits of children living in cities may have diminished in the past two decades


Six Figures
While the rest of the world is rightfully concerned about AI coming to take their jobs, some companies are offering six-figure salaries to a select few who are great at wringing results out of next-gen AI chatbots.
As Bloomberg reports, some companies are offering salaries of up to a sizzling $335,000 per annum for so-called "prompt engineer" positions. In essence, these are supposed to be ChatGPT wizards who are so good at the tech that they can train other people on how to use it more effectively.
Albert Phelps, one of these lucky prompt engineers, who works at a subsidiary of the Accenture consultancy firm in the UK, told Bloomberg that the job entails being something of an "AI whisperer," regardless of educational background: the role draws folks with degrees in fields as disparate as history, philosophy, and English.
"It's wordplay," he said. "You're trying to distill the essence or meaning of something into a limited number of words."
Phelps himself, at only 29, studied history before beginning a career in financial consulting and ultimately pivoting to AI. On a given day at his job, the youthful AI wizard and his colleagues will write about five different prompts and have 50 individual interactions with large language models like ChatGPT.
Growth Market
While there's been a proliferation of low-hanging fruit gigs — there's even a freelance prompt engineer marketplace similar to Fiverr called PromptBase, where people sell their prompt-writing skills for $3-10 a pop — those who get particularly good at writing prompts can draw huge salaries.
Just take it from Mark Standen, the owner of an AI, automation, and machine learning staffing business in the UK and Ireland, who told Bloomberg that prompt engineering is "probably the fastest-moving IT market I've worked in for 25 years."
"Expert prompt engineers can name their price," Standen said, noting that while the jobs start at the pound sterling equivalent of about $50,000 per year in the UK, there are candidates in his company's database looking for between $250,000 and $360,000 per year.
The Bloomberg report notes that naturally, there's no way to know when or if the bubble will burst on this newly-created title, but all the same, earning a six-figure salary to write chatbot prompts, even for just a year or two, sounds a lot better than losing your job to AI.
More on chatbots: That Startup Run by ChatGPT Doesn't Seem to Be Doing So Great
The post Companies Are Paying Bonkers Salaries for People Good at ChatGPT appeared first on Futurism.

Goldman Sachs wants its customers to know that they should be very excited about generative AI — because generative AI tools could let 'em fire human workers and replace them with AI en masse. Hooray!
"The recent emergence of generative artificial intelligence (AI) raises whether we are on the brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity," reads a new Goldman Sachs economic report, published over the weekend.
"Despite significant uncertainty around the potential of generative AI," it continues, "its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects."
Translation? While the future of generative AI is still up in the air, right now, its output is already comparable, in the bank's eyes, to the output of enormously costly human labor. Replace humans with machines, and you no longer have to pay for livelihoods — just subscription fees.
Perhaps most chillingly, the Goldman Sachs report describes generative AI as a "disruption" to the "labor market." And sure, it may well be, but the term "disruption" is usually used to describe significant changes within certain industries, particularly when something new replaces and upends something old. Goldman Sachs is seemingly speaking to the concept of human jobs altogether — a presumably less manageable workforce shift.
In other words, a significant disruption indeed, and one that will cause a significant amount of pain for the masses in turn.
"Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work," reads the report. "Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300 [million] full-time jobs to automation." Gulp.
The report did throw us peasants a bone, echoing a common refrain heard among AI optimists: that AI will replace jobs, but that human society has always adjusted to new technologies over time. Laborers flock to different jobs, and at the same time, new jobs are created, in the short term as well as in the long run. Look how far we've come since the birth of the computer, after all.
"The good news," reads the report, "is that worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth."
Historically speaking, it's a fair point — but one, we gotta say, that definitely dissociates a bit from the immediate pain that eliminating an estimated 300 million jobs will cause. There's also the reality that those who repeat that line — OpenAI CEO Sam Altman included in that line-up — have little to offer in terms of what those generative AI-created roles might look like, and when we might expect to see them crop up.
We're all walking into a new technological unknown, and we can't expect AI innovators to have all of the answers. But still, it's hard to take Goldman Sachs' rosy "good news" outlook seriously when it tends to read more like an excuse to do something, like intentionally eradicate hundreds of millions of jobs, rather than a sound, substance-driven reason that goes beyond just keeping cash in a company's bank, as opposed to its workers' pockets.
And to that end, the Goldman Sachs report did round out that "good news" nugget with a sobering piece of economic wisdom: that while there might be an economic growth-inducing "productivity boom" driven by "the combination of significant labor cost savings, new job creation, and higher productivity for non-displaced workers," the "timing of such a boom is hard to predict."
So, you know, good stuff will happen for everyone, probably — just can't tell ya when!
None of this is all that surprising, given that most big employers out there are constantly looking to cut costs of all kinds, labor included. And elsewhere, technology, as mentioned above, has always changed industries, and with that, changed, eliminated, and created jobs.
But that takes time, and generative AI is moving at a lightning-fast pace, with an incredible amount of money behind it to boot. Only time will tell how quickly the "good news" that Goldman Sachs and others have promised will follow.
READ MORE: The Potentially Large Effects of Artificial Intelligence on Economic Growth [Goldman Sachs]
More on banks and generative AI: Bank of America Obsessed with AI, Says It's the "New Electricity"
The post Goldman Sachs Salivates at AI's Potential to Mass Fire Workers appeared first on Futurism.







Be Afraid
The CEO of OpenAI has admitted repeatedly that he's scared of the tech his company is cooking up — but he doesn't think you should make fun of him for it.
"I think it's weird when people think it's like a big dunk that I say, I'm a little bit afraid," OpenAI CEO and noted doomsday prepper Sam Altman told podcaster Lex Fridman in an episode dropped this past weekend. "And I think it'd be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid."
While Altman reiterated during his Fridman show appearance that his concerns are primarily "disinformation problems or economic shocks" and not algorithmic "superintelligence," he has said a bunch of stuff recently that suggests he's more than a little wigged out about AI.
Take, for instance, his recent comments to ABC News: "A thing that I do worry about is… [OpenAI is] not going to be the only creator of this technology."
"There will be other people who don't put some of the safety limits that we put on it," Altman added.
Take Care
While it seems legit to worry about less-ethical competitors (which is kind of ironic given everything we know about OpenAI) or about the "potentially scary" AIs that will follow his company's current offerings, the comments he's referring to — when he told Fox News that it's a good thing that he has trepidations about what he's created — are pretty eyebrow-raising, even in spite of his attempts to downplay them.
"We've got to be careful here," Altman told the news network earlier in March. "I think people should be happy that we are a little bit scared of this."
While it certainly is good that there are concerns at the top of OpenAI about what may come of artificial intelligence, it doesn't exactly inspire confidence that the CEO has been repeatedly quoted saying he's scared of it — and no amount of couching language will change how weird or funny that is, because if we can't laugh while the world burns, then what else can we do?
More on AI feelings: CEO of OpenAI Says Elon Musk's Mean Comments Have Hurt Him
The post OpenAI CEO: It's Not Funny That I'm Afraid of the AI We're Creating appeared first on Futurism.










Nature Communications, Published online: 29 March 2023; doi:10.1038/s41467-023-37248-8
The SHANK3 gene is linked to autism spectrum disorder and Phelan McDermid syndrome, which have been associated with social memory deficits. Here, authors show activation of the hippocampal CA2-ventral
This novel injection system could help advance gene therapy by nimbly inserting gene-editing enzymes into a variety of cell types














As part of its effort to prod the construction industry to go green, the Biden administration is providing new funding for rebuilding with low-carbon materials after disasters


The asteroid Didymos witnessed its companion get slammed by NASA's DART spacecraft, and Didymos itself may have interesting activity







Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00730-w
A generalizable technique has been developed to create diverse functional inorganic membranes on the surface of various aqueous solutions. The technique ensures that the air–liquid interface receives a continuous supply of floating particles, which then assemble dynamically to form continuous membranes.
Spring started a little more than a week ago, and the Northern Hemisphere has begun to warm; flowers and trees are blooming. Gathered below are some recent images of people enjoying themselves among groves of flowering cherry-blossom trees in Tokyo; Munich; Washington, D.C.; and more—signs of warmer days to come.






Scientists have long been aware of a worrying link between high blood pressure and dementia.
But now, in a new study published in the European Heart Journal, researchers have mapped how high blood pressure affects specific regions of the brain, unearthing the best evidence yet that the two conditions are indeed connected.
In fact, high blood pressure may even be a direct cause of brain dysfunction, they suggest.
"Our study has, for the first time, identified specific places in the brain that are potentially causally associated with high blood pressure and cognitive impairment," said study lead author Mateusz Siedlinski, a researcher at the Jagiellonian University Medical College, in a press release.
Siedlinski and his team parsed through MRI brain scans paired with genetic data from over 30,000 patients in the UK Biobank medical database.
By using Mendelian randomization, a technique that eliminates confounding factors by focusing on genes that could predispose a person to certain conditions, the team found that nine parts of the brain exhibited changes related to both higher blood pressures and a decline in cognitive function.
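As a rough illustration of the logic, and emphatically not the study's actual analysis, here is a small Python sketch using purely simulated numbers. A naive regression of cognition on blood pressure is distorted by a shared confounder, while a Mendelian-randomization-style ratio estimate, which leans on the genetic predisposition alone, recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Purely simulated data (not the UK Biobank analysis):
# G = genetic risk score, U = unobserved confounder,
# BP = blood pressure, COG = cognitive score.
G = rng.binomial(2, 0.3, n).astype(float)   # 0/1/2 risk alleles
U = rng.normal(size=n)                      # confounds both BP and cognition
BP = 120 + 2.0 * G + 3.0 * U + rng.normal(scale=5, size=n)
COG = 100 - 0.4 * BP - 2.0 * U + rng.normal(scale=5, size=n)  # true causal effect: -0.4

# Naive regression of cognition on blood pressure is biased by the confounder U.
naive_slope = np.polyfit(BP, COG, 1)[0]

# Mendelian randomization (Wald ratio): the variant is inherited essentially at
# random and affects cognition only through blood pressure, so the ratio of the
# G->COG effect to the G->BP effect estimates the causal effect of BP on cognition.
g_on_bp = np.polyfit(G, BP, 1)[0]
g_on_cog = np.polyfit(G, COG, 1)[0]
mr_estimate = g_on_cog / g_on_bp

print(f"naive: {naive_slope:.2f}  MR: {mr_estimate:.2f}  (true effect: -0.40)")
```

The point is only that using genetic predisposition as the "randomizer" sidesteps confounders, such as lifestyle, that affect both blood pressure and cognition.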
To eliminate any room for doubt, the researchers then double-checked their findings with a separate batch of patients in Italy.
"In our study, if a gene that causes high blood pressure is also linked to certain brain structures and their function, then it suggests that high blood pressure might really be causing brain dysfunction at that location, leading to problems with memory, thinking and dementia," explained study co-author Tomasz Guzik, a professor of cardiovascular medicine at the University of Edinburgh, in the release.
There is one nagging caveat, though: since most of the participants were white and middle-aged, the researchers note that it "might not be possible to extrapolate the findings to older people," according to the statement.
Nevertheless, of the parts they observed to be affected, the putamen and the anterior thalamic radiation were the most notable. The putamen, located in the front of the brain, is responsible for controlling movement and facilitating learning, while the latter is in charge of executive functions like planning tasks. Various regions of white matter, the tissue responsible for connecting different parts of the brain, were also found to be affected.
The researchers estimate that around 30 percent of the global population suffers from high blood pressure, which, based on their findings, puts a hefty helping of us at risk of dementia.
Troubling as the finding may be, it's no doubt a welcome clue in treating the condition, especially since the cause of its most devastating form, Alzheimer's disease, remains largely unknown.
"It has been known for a long time that high blood pressure is a risk factor for cognitive decline, but how high blood pressure damages the brain was not clear," said co-author Joanna Wardlaw, a neuroscientist at the University of Edinburgh, in the release.
"This study shows that specific brain regions are at particularly high risk of blood pressure damage, which may help to identify people at risk of cognitive decline in the earliest stages, and potentially to target therapies more effectively in future," she added.
More on dementia: Paparazzi Won't Stop Harassing Bruce Willis Even Though He Has Dementia
The post High Blood Pressure May Cause Dementia, Scientists Say appeared first on Futurism.



Extra Grounded
Disgraced cryptocurrency exchange CEO Sam Bankman-Fried just moved into a new circle of personal house-arrest hell.
Until now, the former head of FTX has enjoyed a relatively cozy at-home arrest at his parents' $3.5 million-plus Palo Alto home, spending his days playing with his new guard dog, writing long-winded Substack posts about how FTX was actually still solvent at the time of his arrest, and so on.
Most importantly, he's been able to play the popular video game League of Legends, his passion for which is, for lack of a better word, legendary. The former CEO has taken to Twitter a number of times to discuss his obsession with the cult-status game, and, according to lore, once played the game throughout the duration of a massively important pitch meeting with the investment firm Sequoia.
But sadly, it appears that his League of Legends days are officially over.
On Monday, prosecutors in the case against the former crypto wunderkind issued stricter bail requirements, which, among a number of other restrictions, bar the accused fraudster from playing any videogames that, like League of Legends, "permit chat or voice communication."
Brutal.
Flip Phone
The new court-issued guidelines also restrict the founder to the use of a single, court-issued laptop, on which Bankman-Fried is only allowed to browse "pre-approved" websites, a list that includes all ".gov" sites as well as YouTube, Wikipedia, and several blockchain-trackers. Most major news sites are also still on the table, as are food delivery services — a bright spot on a dark day for the Doordash-stanning Bankman-Fried.
His smartphone has also been taken and replaced with an Uncle Sam-approved flip phone. And though his parents, who are both Stanford professors, are allowed to keep and use their personal devices freely, their son is NOT to access them, and all visitors to the house have to agree to hand their devices over to security guards upon entering the home.
We'd be lying if we said we weren't a little bit bummed about the fact that SBF is no longer allowed on Substack, as we would have loved reading a rambling SBF newsletter about those new charges accusing him of spending $40 million to successfully bribe Chinese authorities — a revelation that we're sure has nothing to do with this newly-tightened government leash.
But allegations aside, Futurism is not on the government-approved list, so SBF likely won't be reading this anyways.
READ MORE: SBF Cut Off From His Favorite Toys Under New Proposed Bail Conditions [Gizmodo]
More on SBF: FTX's New CEO Says Company Did "Old Fashioned Embezzlement"
The post FTX Founder Suffers as New Bail Conditions Cut Him Off From League of Legends appeared first on Futurism.















13. Fruktansvärda Monster ("Terrifying Monsters"). Excerpt: "But today, few people doubt that there is life in space. Even the bone-dry skeptics in the association Vetenskap och folkbildning write …" Continued
The post Fruktansvärda (podcast), 29 March 2023 first appeared on Vetenskap och Folkbildning.


Nature, Published online: 28 March 2023; doi:10.1038/d41586-023-00921-5
Space telescope finds exoplanet TRAPPIST-1b probably doesn't have an atmosphere. Plus, the long road to treatments for chronic pain, and how fairness can foster research integrity.
Nature, Published online: 27 March 2023; doi:10.1038/d41586-023-00881-w
Complexes of DNA and dye send a light signal in the presence of strontium ions created by nuclear power plants.
Nature, Published online: 27 March 2023; doi:10.1038/d41586-023-00902-8
Reducing inequality could see the world population fall to 6 billion people. Plus, fish can sense each other's fear, and how driverless cars can learn to handle the worst drivers on the road.
Following an investigation launched by Cornell, a committee recommended pulling several papers by lung-disease researcher Augustine M. K. Choi, who served as dean of Weill Cornell Medicine until this year, Retraction Watch has learned.
Choi's latest retraction, which brings him up to three so far, came on March 15, when The Journal of Clinical Investigation pulled "UCP2-induced fatty acid synthase promotes NLRP3 inflammasome activation during sepsis." The paper has been cited 178 times, according to Clarivate's Web of Science.
The retraction notice reads:
Cornell University, Harvard Medical School, and Brigham and Women's Hospital jointly notified the JCI that Figures 3A, 4D, 7B, and 8B and Supplemental Figures 4 and 7A are not reliable. In accordance with the institutional recommendations, the JCI is retracting this article.
Choi served as dean of Weill Cornell Medicine and provost of medical affairs at Cornell University from 2017 to 2022. He has been a professor of both medicine and genetic medicine at Weill Cornell Medical College since 2014.
As we reported earlier, Choi had a paper retracted and then republished in 2015. Five years later, in 2020, he lost another article "due to inclusion of data that were published previously … and represent different results." That same year, commenters on PubPeer started flagging potential image problems in several of Choi's other publications.
Following the latest retraction, we reached out to Cornell, Harvard, and Brigham and Women's for comment on the recommendations the notice refers to. No one from Harvard replied and a spokesperson from Brigham and Women's declined to comment, deferring to Cornell. Joel M. Malina, Cornell University vice president for university relations, told us:
In 2020, Cornell learned of data integrity inquiries involving research published out of Dr. Choi's laboratory. Cornell launched an independent investigation at that time, conducted by a committee of preeminent scientists from outside of Cornell and led by external counsel, which resulted in a detailed analysis of a number of publications and a thorough review of lab practices. The final report found that Dr. Choi did not commit research misconduct. The report made recommendations for retraction of several papers because certain figures were not reliable and, given the absence of original data, the scientific record could not be corrected. Retraction is consistent with academic norms in such circumstances.
Malina did not directly say whether Choi's departure from the dean and provost positions was influenced by the investigation, but he said that Choi ultimately chose not to seek another term in those roles. Malina also highlighted the contributions Choi has made to Cornell, including his leadership during the pandemic and establishing a debt-free tuition plan for medical students.
Choi did not respond to an email from Retraction Watch.
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that's not in our database, you can let us know here. For comments or feedback, email us at team@retractionwatch.com.
Nature Communications, Published online: 29 March 2023; doi:10.1038/s41467-023-37483-z
We address a controversy over use of the term "gene drive" to include both natural and synthetic genetic elements that promote their own transmission within a population, arguing that this broad definition is both practical and has advantages for risk analysis.
Nature Communications, Published online: 29 March 2023; doi:10.1038/s41467-023-37463-3
The neural mechanisms underpinning ketamine's dissociative and antidepressant effects remain poorly understood. Here, the authors analyzed ketamine-induced brain dynamics with intracranial recordings in humans and found that ketamine engages different brain areas in distinct frequency-dependent patterns that may relate to its dissociative and antidepressant effects.










Are you curious about what the future might hold for us? From advancements in technology and space exploration to shifts in social and environmental landscapes, futurology offers a glimpse into the possibilities that await us. Check out my link to explore the latest trends and predictions in futurology, and discover how they could shape our world in the coming years. With so much on the horizon, now is the perfect time to dive into this exciting field and prepare for what's to come. Don't miss out on the opportunity to expand your horizons and explore the potential of the future.
[link] [comments]
I see the argument that "robots will automate jobs so capitalism can't survive" a lot, and tbh I think it's a very weak argument. However, I am open to being wrong and would more than welcome any counterarguments.
Just because no one is working doesn't mean that capitalism will suddenly collapse. That makes zero sense, and I can't see why anyone would think that. To me it feels like wishful thinking or maybe just ignorance of reality. Companies will still sell products, bills will still have to be paid, and money will still have to be exchanged (for example, for robotaxis). All of these will cost money and require capitalism in order to operate. And before anyone says "the government will provide everything": planned or government-run economies don't work, and history shows that. If everything were run by the government, that would basically stifle innovation and destroy the middle and upper classes.
In short, capitalism just ain't going anywhere, despite what some may want to believe.
[link] [comments]
I don't trust Elon Musk. He's not an engineer, nor is he intelligent. His best quality is his ruthlessness, so anything that he wants is most likely bad for the common people.
[link] [comments]



An open letter, signed by over 1,100 artificial intelligence experts, CEOs, and researchers — including SpaceX CEO Elon Musk — is calling for a six-month moratorium on "AI experiments" that take the technology beyond a point where it's more powerful than OpenAI's recently released GPT-4 large language model.
It's a notable expression of concern by a veritable who's who of some of the most clued-in minds working in the field of AI today, including researchers from Alphabet's DeepMind and the "godfather of AI" Yoshua Bengio.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," reads the letter, issued by the nonprofit organization Future of Life, adding that "advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."
"Unfortunately, this level of planning and management is not happening," the letter continues, "even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."
The letter questions whether we should allow AI to flood the internet with "propaganda and untruth" and take jobs away from humans.
It also references OpenAI CEO Sam Altman's recent comments about artificial general intelligence, in which he argued that the company will use AGI to "benefit all of humanity," sentiments that were immediately slammed by experts.
The six-month pause the experts call for should be used to "develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," they write.
"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal," the letter reads.
That kind of concern is echoed by governments around the world, which are struggling to get ahead of the problem and address the regulation of AI in a meaningful way. Last year, US president Joe Biden released a draft of an AI bill of rights, which would allow citizens to opt out of AI algorithms making decisions, but experts criticized the proposal for being toothless.
And it's not just governments. Musk, who helped found OpenAI in 2015 before leaving over ideological differences three years later, has repeatedly voiced concern over overly powerful AI.
"AI stresses me out," the billionaire told Tesla investors earlier this month, clarifying later that he's a "little worried" about it.
"We need some kind of, like, regulatory authority or something overseeing AI development," Musk added at the time. "Make sure it's operating in the public interest. It's quite dangerous technology. I fear I may have done some things to accelerate it."
With OpenAI, a company that transformed from a non-profit to a for-profit after Musk left, it's not a stretch to see the potential dangers of a profit-driven model of AI development.
Whether the company is acting in good faith or hunting multibillion-dollar deals with the likes of Microsoft to maximize profits remains as murky as ever.
And the danger of a runaway AI that does more harm than good is more pressing than one might think. The current crop of AI models, such as GPT-4, still has a worrying tendency to hallucinate facts and potentially mislead users, an aspect of the technology that clearly has experts spooked.
More on AI: Levi's Mocked for Using AI to Generate "Diverse" Denim Models
The post Huge Group Calls for Temporary Pause on AI More Advanced Than GPT-4 appeared first on Futurism.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05992-y
All-perovskite tandem 1 cm2 cells with improved interface quality
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00926-0
Repurposing a microbial system to deliver molecules directly into cells, and the disconnect between research into, and treatment of, chronic pain.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05880-5

Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05864-5
Tracking the formation of cubic ice (ice Ic) using transmission electron microscopy and low-dose imaging shows preferential nucleation of ice Ic at low-temperature interfaces and two types of stacking disorder.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05772-8
The advantage of living in cities compared with rural areas with respect to height and BMI in children and adolescents has generally become smaller globally from 1990 to 2020, except in sub-Saharan Africa.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05730-4
The gluonic gravitational form factor of the proton was determined using various models, and these analyses showed that the mass radius of the proton was smaller than the electric charge radius.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05754-w
Analysis of ancient human DNA from the Swahili coast reveals that predominantly African female ancestors and Asian male ancestors formed families after around ad 1000 and lived in elite communities in coastal stone towns.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05759-5
Chips with 256 × 256 memristor arrays that were monolithically integrated on complementary metal–oxide–semiconductor (CMOS) circuits in a commercial foundry achieved 2,048 conductance levels in individual memristors.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05841-y
The cryo-electron microscopy structure of the Polycomb repressive deubiquitinase (PR-DUB) in complex with the H2AK119ub1 nucleosome provides insight into how the substrate specificity of PR-DUB is achieved.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05761-x
Analysis of observations from the Atacama Large Millimeter/submillimeter Array showed evidence of the thermal Sunyaev–Zeldovich effect in the direction of the Spiderweb protocluster at a redshift of 2.156.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05872-5
The cryo-EM structure of the
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05891-2
Pancreatic ductal adenocarcinoma cells show a specific dependency on
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05832-z
The authors report the cryogenic electron microscopy structures of human
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05851-w
The murine norovirus NTPase NS3 induces mitochondrial disruption, resulting in cell death, which is required for viral egress.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05809-y
By switching the nucleation preferences in aqueous systems of inorganic precursors to bias formation and growth at the air–liquid interface, the mechanistic formation of inorganic membranes from the floating-particle system is demonstrated.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05844-9
A machine learning approach is used to analyse multi-omics (proteomics, metabolomics and transcriptomics) data, producing genetic scores for more than 17,000 biomolecular traits in human blood, and identifying possible associations with disease.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05870-7
The tail fibre of an extracellular contractile injection system (eCIS) from Photorhabdus asymbiotica recognizes targets expressed on eukaryotic host cells, and can be reprogrammed to target specific organisms and cell types for delivery of novel protein payloads.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05762-w
Simulations show that projected increases in Antarctic meltwater will slow down the abyssal ocean overturning circulation over the coming decades and lead to warming and ageing of the ocean abyss.
Nature, Published online: 29 March 2023; doi:10.1038/s41586-023-05869-0
A multiomics single-cell atlas of the human maternal–fetal interface including the myometrium, combining spatial transcriptomics data with chromatin accessibility, provides a comprehensive analysis of cell states as placental cells infiltrate the uterus during early pregnancy.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00721-x
Genetic scores for predicting levels of several types of biomolecule have been developed and validated in people of diverse ancestries, and used to uncover insights into disease biology. An open resource to disseminate these scores, OmicsPred, will enable researchers to predict various molecular traits from genetic profiles in their own data sets.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00302-y
Analyses of ancient DNA from 80 individuals buried in medieval Swahili stone towns along the East African coast revealed that these individuals had both African and Asian ancestry. The findings suggest that in most cases, African women began having children with Asian men at least 1,000 years ago, at several locations along the coast.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00835-2
Simulations show that the melting of Antarctic ice reduces the production of deep water that stores heat at the bottom of the Southern Ocean. Comprehensive models could reveal whether the trend will persist.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00922-4
Technique borrowed from nature, and honed using artificial intelligence, could spur the development of better drug-delivery systems.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00724-8
The number of distinguishable conductance levels in memristor devices — electronic components that store information without power — has been limited by noise. An understanding of the source of the noise, and development of an effective denoising process, have now enabled 2,048 conductance levels to be achieved in memristors in large arrays fabricated in a chip factory.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00875-8
Galaxy clusters are among the largest objects in the Universe to be held together by their own gravity. Most of the ordinary matter in nearby galaxy clusters is associated with a diffuse, hot-gas component. The detection of this 'intracluster medium' in a distant protocluster of galaxies sheds light on the cluster's formation history.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00847-y
An injection system from bacteria has been re-engineered in an effort to develop a programmable system for protein delivery into cells. Its customizability opens the door to a multitude of biomedical applications.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00305-9
An unprecedented data set of the body measurements of 71 million children and adolescents reveals that, in most countries, growing up in cities no longer results in the height advantage seen in most of the world in the 1990s. However, in much of sub-Saharan Africa, the growth and development advantages of urban living have been amplified.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00836-1
The size of the space taken up by a proton's mass has been measured, and it's much smaller than previously thought. The result is a key step towards understanding the complex structure of this fundamental building block.
Nature, Published online: 29 March 2023; doi:10.1038/d41586-023-00848-x
Unusual metabolic pathways used by






Children with autism show different brain activity already as infants when they look at patterns and moving objects. Knowledge of this difference could lead to a better understanding of early development in autism.
The post Autism is noticeable already in infancy – movements are perceived differently first appeared on forskning.se.


Scientific Reports, Published online: 29 March 2023; doi:10.1038/s41598-023-32251-x
Retraction Note: The









the gun heard the first shot the gun thought it was a bursting pipe the
gun heard the second shot and the third and the fourth the gun real-
ized this was not a pipe the gun's teacher told everyone to get on the
ground the gun's teacher went to lock the door the gun saw glass
break and the teacher slump and bleed and fall silent the gun
texted its parents and said i love you i'm so sorry for any trouble i've
caused all these years you mean so much to me i'm so sorry the gun
thought it would never leave the classroom the gun moved to a closet
filled with several other shaking guns the gun texted its best friends in
the group chat to see if they were okay the gun waited on a response
the gun received one the gun did not receive another the gun waited
for an hour the gun heard the door kicked open the gun was still in
the closet and didn't know who had entered the room the gun thought
this was the end the gun thought of prom and graduation and college
and children and all the things the gun would never have the gun heard
more bullets the gun heard he's down! the gun climbed out of the closet
the gun put its hands on its head the gun walked outside the gun
saw the cameras the gun hugged its sobbing mother and cried into her
arms the gun heard thoughts and prayers the gun heard Second Amend-
ment the gun heard lone wolf the gun texted its friend again the gun
waited for a message the message never came

Lo-fi chill music was playing in the distance. Shooting stars sliced through the sparkling galaxy overhead. I was defying physics, hovering in space, on my back. Relaxed, I yawned and stretched, my fist punching a pillow that I had forgotten about.
I was, of course, not in space. Physically, I was on a chaise in my home. Virtually, I was in one of many "sleep rooms" on the virtual-reality platform VRChat—virtual spaces where people can relax, and even sleep, with their headsets on. VR sleep rooms are becoming popular among people who suffer from insomnia or loneliness, offering cozy enclaves where strangers can safely find relaxation and company—most of the time.
Each VR sleep room is created to induce calm. Some imitate beaches and campsites with bonfires, while others re-create hotel rooms or cabins. Soundtracks vary from relaxing beats to nature sounds to absolute silence, while lighting can range from neon disco balls to pitch-black darkness. The opportunity to sleep in groups can be particularly appealing to isolated or lonely people who want to feel less alone.
That's the case for Mydia Garcia, who began social sleeping almost a year ago: "I'd go dancing [in VR] till 3 a.m., and I was tired but I didn't want to leave VR or my friends." Garcia and their friends would visit secluded worlds and then cuddle together, finding the experience therapeutic and bonding.
Likewise, Jeff Schwerd discovered sleep rooms during the pandemic and found an antidote to loneliness. He likes to snuggle with strangers and often uses full-body tracking, which allows avatars to move in sync with IRL bodies, to imitate the feeling of being cuddled and held. Schwerd says it makes him feel protected and so more able to sleep. He finds the atmosphere of sleep rooms relaxing, too.
"My favorite place to relax alone is this grassy hill with a campfire," he says. "I like hearing the sound of the fire."
The company is not the only reason people fall asleep in VR. Scott Davis uses VRChat sleep rooms multiple times a week to fight his insomnia. "It's so much easier to sleep in VR for me, and it has helped me get sleep more reliably," he says. "Normally, outside of VR, I need to be quite fatigued to fall asleep. But in VR, I can go and lie down and fall asleep faster, even if I'm not tired at first."
It's why he returns to sleep rooms. "I can feel confident that I am controlling my sleep as an insomniac," Davis says.
That feeling of control is a huge reason why VR can have a therapeutic effect for people with insomnia, says Massimiliano de Zambotti, a neuroscientist who researches sleep at the nonprofit SRI International.
"If you have insomnia, you go to bed and your brain starts spinning. You have worries and ruminations and your heart is racing. You're not relaxed and in an elevated state of arousal, which prevents you from falling asleep," de Zambotti says. "Neuroscientifically, VR works because you can modulate the environment you are in, but you have an anchor to reality and can feel safe enough to fall asleep."
The trouble is, what if the experience doesn't make you feel that way?
Feeling safe is crucial for relaxation and sleep, even if you are alone in your own bed at home.
I entered a sleep room one day and immediately heard the voice of a child in my ear. The kid, who had a robot avatar, tried and failed to engage me and a medieval knight in conversation. (My avatar was a stick of butter with a tiny top hat, because why not?) Exasperated, the robot floated over to the corner where about seven avatars were peacefully lying together, seemingly asleep. The child's voice then taunted them: "I will kill you. I will literally kill you."
It's well known that the metaverse is full of underage users, and my journey through sleep rooms confirmed that kids pop up disturbingly often in these adult spaces. Another sleep room I visited was overrun with childlike voices speaking Spanish and French. I took an elevator up to a "roof" where I found a corner illuminated in red lights with plush, velvety couches. "Hi, I like your avi [avatar]," a kid's voice said behind me. I swiveled around to find another robot avatar talking to what appeared to be a scarecrow. "I like yours too," a man's voice said. "Wanna cuddle?" The child floated away and I followed suit, unnerved.
Schwerd told me that he'd seen kids in sleep rooms, too. "You definitely get underage people being a nuisance," he says. But he insisted that most sleep rooms were quiet and "respectful."
As I roamed around, I mostly found this to be true. Some sleep rooms I stumbled into were empty and silent. Others had avatars nestled against each other, fast asleep. Still others had groups of avatars huddled together, awake but quiet, some whispering, others just relaxing. I often felt the need to mutter "Excuse me" and tiptoe, forgetting that since I was a drifting stick of butter in a room full of avatars, few would hear me or care.
I couldn't fall asleep in VR. I was extremely aware of my surroundings and found the headset on my face uncomfortable. But while I found some rooms to be disturbing, I did discover sleep rooms that were hushed and peaceful, places to simply sit and be. In the real world, I struggle to find quiet places to relax in, and if nothing else, virtual sleep rooms offered me space and time to lie back and stare at the stars.

As the weather gets hotter, preschoolers get less exercise outside, research finds.
The findings, published last week in JAMA Pediatrics, may seem obvious to anyone who's watched their little ones wilt and turn red-faced on the playground. But researchers approached the question in a novel way: they used advanced wearable activity monitors to follow specific children in a specially designed laboratory school for studying child behavior and development.
"Given that we're seeing an increasing number of hot days," says Andrew Koepp, lead author of the paper and PhD candidate in human development and family sciences at the University of Texas at Austin, "it's important to understand how young children's activity changes so that we can take steps to make sure they are getting the physical activity they need to be healthy."
The study looked at children ages three to six who are enrolled at the Priscilla Pond Flawn Child and Family Laboratory School on campus. The study used digital wearable activity monitors on each child for a week in April 2022 as the children played on the school's partially shaded playground. Ambient air temperatures were between 72 degrees and 95 degrees Fahrenheit.
When temperatures were at 72 degrees, children engaged in moderate to vigorous physical activity (MVPA) for 27% of their outdoor playtime. MVPA is defined as raising the heart rate to the point where a person can speak but would have a hard time singing. When temperatures rose to 95 degrees, MVPA dropped to 21% of outdoor playtime. Additionally, sedentary time, when children were sitting or still, rose on hotter days, from 62% of playtime at 72 degrees to 70% at 95 degrees.
The Centers for Disease Control recommends that children between the ages of three and five be physically active throughout the day, as physical activity improves young children's development, academic performance and learning, and helps build strong muscles and bones, contributes to healthy weight and blood sugar levels, and improves cardiovascular, lung, and mental health.
The sample at the laboratory school is made up of healthy, middle- and upper-income children, but, as laboratory school director Amy Bryan, an associate professor of practice in human development and family sciences, explains: "These effects would likely be amplified in more vulnerable children who had health issues or other risk factors for low activity, as would the impacts on their development."
Indoor play is not a reasonable substitute for outdoor play, says coauthor Liz Gershoff, a professor of human development and family sciences.
"Other research by our team has shown that children are much more physically active outside than inside," Gershoff notes. "This is likely in large part because classrooms have more expectations about behavior indoors than outdoors. There may be some indoor and air-conditioned settings that are conducive to physical activity, such as indoor gyms and play places, but those may not be available in all areas."
Because young children may be less aware of things such as the need to slow down and drink water, heat can be especially dangerous for them. Adults need to monitor children for signs of distress and take steps to keep kids safe.
Playgrounds may also have to change, Koepp says, by adding more features like shade sails, heat-reducing materials, fans, misters, and water play areas. Play times may also have to move to earlier in the morning, when it's cooler.
"A lot of policies are focused on stopping climate change, but in many ways, it's already here," says Koepp. "We need to start looking at policies, practices, and recommendations that recognize that, if we're going to limit the impacts on the next generation."
Coauthors are from the University of Texas Health Science Center at Houston School of Public Health, Austin Campus, which also contributed to the research. The research had funding from the National Science Foundation, the National Academy of Education/Spencer Foundation, the Graduate School of The University of Texas at Austin, and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.
Source: UT Austin
The post As temps rise, preschoolers get less exercise outside appeared first on Futurity.








That startup being run by ChatGPT is still going — but from the outside, it looks like kind of a mess.
To recap: about two weeks ago, self-described "AI soothsayer" Jackson Greathouse Fall took to Twitter to announce that he would be commencing what he described as the #HustleGPT challenge. Basically, Fall would ask ChatGPT to start a business with only $100 in starting funds. He'd get directions from ChatGPT on how to grow and scale the business, and act as the bot's "human liaison," doing its bidding.
The project went viral, with AI bros calling it a "god mode" use of ChatGPT, and at first Fall was posting regular updates that gave a sense of momentum. But at this point, he hasn't posted a new update since last Wednesday — so we're starting to wonder what's going on with the project.
For one thing, even when he was still posting updates, progress was sounding iffy. His last update, posted last week, claimed that the company — a sustainable e-commerce website called Green Gadget Guru that ChatGPT directed him to set up — had generated $130 in revenue. Sure, that would be something, but assuming that Fall was working eight-hour days on the project, it still means that as of that update, he'd been making less than $3 per hour. At that rate, it'd be a long road toward making good for his investors, who he said had contributed around $7,700, never mind turning a profit.
In theory, a site like Green Gadget Guru could be roaring along by now. ChatGPT basically told Fall to spin Green Gadget Guru up as an affiliate marketing site, meaning it would recommend products and get a small kickback if anyone bought them. That's an extremely well-established niche, and all the tech exists to quickly fire up a new affiliate operation.
At first, it did sound like Fall was making concrete progress. He announced last week that he — sorry, ChatGPT — had hired two humans to help with the business: a content writer tasked with using ChatGPT to generate blog posts for the company website and a graphic designer who would use the text-to-image generator Midjourney to create the site's imagery.
Judging by the status of the Green Gadget Guru site, though, ChatGPT seems to be struggling severely. Frankly, it's a bit wretched. Seriously. Check it out yourself.
For one thing, despite Fall's talk of that human writer producing copy for the site, the only blog posts it's published so far — like this one, "Ten Eco Friendly Kitchen Gadgets" — contain only the "lorem ipsum" text that designers use to test a layout before it has any content. If you think about it, it's also a little strange that ChatGPT told Fall to hire a human writer at all. Why can't ChatGPT write the blogs and copy itself?
Another glaring issue is that there don't seem to be any actual products on the site. There are technically product categories, but for some reason all of them — Electronics, Home & Garden, Office Supplies, Personal Care, and Kitchen — are represented by the same picture of a green t-shirt. But nothing seems to happen when you click on them.
There is a "featured product," which the site lists with the amazingly generic title of "eco-friendly water bottle," but the so-called water bottle is similarly represented by a picture of a green tote bag, and there doesn't seem to be any way to actually buy it. ("It" being either the water bottle or the tote bag.)
Particularly weird is what we can't find on the site: a single affiliate link to any actual product. And that raises a further question: without them, how did the site generate that piddling $130 in revenue that Fall was talking about last week?
Fall didn't address that question when we asked, but he did write us a brief message addressing the status of Green Gadget Guru.
"AI directed website moving at human speed," he said. "Promise y'all will have more updates soon, I know everyone's waiting on the edge of their seats."
He also pointed us to a tweet in which he seemed to acknowledge that progress had been slow.
"I gotta be real with y'all, last week kicked my ass," it read. "I'm still learning how to context-shift multiple times a day and get all my work done."
"More GGG + HustleGPT progress threads tomorrow!!!" he added (no new thread appeared the next day.)
Other netizens, we should note, have their own misgivings about the ChatGPT-run Green Gadget Guru.
"The unfortunate reality is that HustleGPT kinda started as a grift. The guy didn't even fix up the website he took like $7,500 in investments for," tweeted Dave Craige, a #HustleGPT enthusiast and Green Gadget Guru pessimist. "We can turn that around! We can push for a radically better approach to ethics / money. We can do things completely different."
"Where did the revenue come from?" asked another netizen, in response to Fall's final thread about the project last week, wondering like we did: "how did the $130 in revenue come about??"
Are you involved with HustleGPT or the AI startup scene? Reach out: tips@futurism.com.
Of course, starting up a company that actually makes money — especially with only $100 in startup costs — is hard. And it's difficult to believe that very many investors thought that a company run by an experimental chatbot was going to be a watertight financial decision. It's easy to imagine that for most of them, it was like buying a bit of Dogecoin just for the laughs.
Still, the virality of Fall's thread compared to the dubious results does illustrate the reality-clouding hype that's sweeping the AI space right now. In fact, it can kinda reek of web3-like grifting — a type of digital scammery made possible by the frothy, unbridled, and often unfounded hype around a new piece of gold rush tech. (Fall, according to his LinkedIn, was the cofounder of a blockchain company back when that market was red hot.)
It's worth mentioning that AI bros have already been pumping up Fall's work, celebrating Green Gadget Guru's supposed $130 in revenue and calling it one of the "most incredible things" that's been done with GPT-4 — without addressing any of its glaring and obvious issues. If that's not the hype cycle in action, we don't know what is.
That said, it's most likely that Fall just didn't expect his tweet to take off, and now has the unenviable job of watching ChatGPT try to run a company, a task that it doesn't seem quite ready for just yet.
The reality, of course, is that AI tech still has a lot of rough edges, and its business applications are consequently still hazy. Its work often looks good on the surface, but there are frequently deep issues under the hood that still need to be worked out.
And honestly, who knows. Maybe Green Gadget Guru will become a runaway success with ChatGPT calling the shots. Maybe the chatbot, and Fall doing its bidding, just need a little more time to get it off the ground. We'll be watching.
In the meantime, if you have any questions, you can email Green Gadget Guru at its email address, which is currently listed as contact@company.com. Best of luck.
More on HustleGPT: Man Starts Business with Only $100 by Doing What ChatGPT Tells Him
The post That Startup Run by ChatGPT Doesn't Seem to Be Doing So Great appeared first on Futurism.

Powder Cloud
An amazing video recorded by Thomas Farley, who was visiting a Utah ski resort earlier this week, shows a couple of skiers watching as a giant avalanche thunders through a snowy mountain valley — and then eventually inundates them, in a much-diminished form — in a powerful demonstration of the forces of nature at work.
"What the heck, bro," Farley exclaimed, watching the giant cloud approach at blistering speeds. "Here it comes, bro."
The video, which was taken on the outer edge of the Sundance Mountain Resort in Utah on Monday, cuts off just in time for the gigantic cloud of snow to engulf the incredulous group of skiers.
Off Property
Fortunately, despite the daunting end to the video, nobody appears to have gotten hurt.
"I checked to make sure I was not going to get hit and then got my phone out to film," Farley told Storyful. "The avalanche did not make it to the resort boundaries, but the massive powder cloud did. [It] kept us covered in a super thick cloud of snow for one or two minutes."
The resort's Twitter account later confirmed that the avalanche "happened off resort property and no one was hurt. We are very grateful for our amazing ski patrol and mountain operations crew who work tirelessly to make sure our slopes are safe."
"We remained open all day and hope you will join us tomorrow for more amazing skiing and boarding," the resort added in a follow-up.