Nature, Published online: 09 March 2023; doi:10.1038/d41586-023-00717-7. Proof-of-concept mouse experiment will have a long road before use in humans is possible.
Preliminary estimates suggest that a 50-meter space rock called 2023 DW has a roughly one-in-600 chance of colliding with our planet in 23 years.
- FDA Will Require Dense Breast Disclosure at Mammogram Clinics
Out of Thin Air
In an exciting turn for the field of sustainable energy research, Australian scientists have found a way to make energy out of thin air. Literally.
As detailed in a new study published this week in the journal Nature, researchers from Monash University in Melbourne, Australia, discovered a new bacterial enzyme that transforms the traces of hydrogen in our atmosphere into electricity, technology that could one day be used in fuel cells that power anything from a smartwatch to a car.
"We've known for some time that bacteria can use the trace hydrogen in the air as a source of energy to help them grow and survive, including in Antarctic soils, volcanic craters, and the deep ocean," said Professor Chris Greening, a contributor to the study, in a statement.
"But we didn't know how they did this," he added, "until now."
The enzyme, dubbed Huc, was extracted from Mycobacterium smegmatis, a fairly common — and wildly resilient — soil bacterium. According to the study, it was discovered through a series of advanced molecular-mapping techniques.
"Huc is extraordinarily efficient," said Rhys Grinter, study lead and research fellow at Monash University, in the statement. "Unlike all other known enzymes and chemical catalysts, it even consumes hydrogen below atmospheric levels — as little as 0.00005 percent of the air we breathe."
The researchers used advanced microscopy techniques to first map the bacteria's internal atomic and electric structures, producing "the most resolved enzyme structure reported by this method to date," according to the statement.
Enzyme Fuel Cell
While it's unlikely to turn the sustainable energy industry on its head any time soon, the scientists say Huc is "astonishingly stable" and could one day be used as a tiny, sustainable, bacteria-powered battery for small devices.
"When you provide Huc with more concentrated hydrogen, it produces more electrical current," Grinter told LiveScience. "Which means you could use it in fuel cells to power more complex devices, like smart watches, or smartphones, more portable complex computers, and possibly even a car."
More on energy: Researchers Successfully Turn Abandoned Oil Well into Giant Geothermal Battery
The post Scientists Discover Enzyme That Can Turn Air Into Electricity appeared first on Futurism.
In one of the strongest-worded takedowns of cryptocurrency we've come across, a pair of Johns Hopkins economics experts eviscerated the industry, arguing that it's "part cocaine."
In a fiery Wall Street Journal op-ed, economics professor and outspoken crypto critic Steve Hanke and Matt Sekerke, a fellow at the university's economics school, argued that while the devastating 2022 crypto crash could've technically ended up worse, the existing links between traditional banks and crypto "show how easily a cryptocurrency crisis might spill over."
"Contrary to what its marketing wizards tell us, crypto is neither money nor a vehicle for finance," they expounded. "It's an elaborate simulation of finance that produces gains and losses."
Comparing crypto to casino chips, the two experts argued that crypto is actually worse than gambling because, while people can pretty well guess the odds in a casino, "the odds in crypto are subject to gross manipulation."
And even government intervention wouldn't be able to redeem it, either.
"Regulation might stabilize the house odds and the exchange rate for chips such as stablecoins," they declared, "but it wouldn't transform crypto into finance."
Burn, Baby, Burn
In a post-FTX world, there are, according to Hanke and Sekerke, two options: regulate crypto to tamp down its free-for-all nature, or let it burn. It's not hard to see which side the economists came out on.
"Regulating crypto would encourage denser, deeper connections, generating systemic risks," the pair wrote — and as such, they think the entire industry needs to be kiboshed by the government.
The JHU economists continued their elaborate metaphor, likening crypto to ozone-depleting chemical compounds, antiquated trash bonds, and, well, the financial industry's recreational drug of choice in the 1980s.
"Crypto is part chlorofluorocarbon, part cocaine and part bearer bond," Hanke and Sekerke concluded. "It isn't the future of finance. More than malign neglect, the US needs policies that will eliminate cryptocurrencies and their metastases."
Tell us how you really feel!
More on crypto crusades: Influential Economist Tells Davos Elites That Crypto Is a Complete Scam
The post Economists Compare Crypto to "Cocaine" in Scathing Takedown appeared first on Futurism.
Beyond the Clouds
One Florida-based startup wants to redefine the limits of cloud computing so drastically as to make its name obsolete — by taking it beyond our atmosphere and placing it on the lunar surface, Gizmodo reports.
In a press release on Monday, Lonestar Data Holdings announced that it had secured an additional $5 million in funding as it marches ever closer to its ambitious experiment of running data centers on the Moon.
"Data is the greatest currency created by the human race," said founder Chris Stott, in an earlier press release last year.
"We are dependent upon it for nearly everything we do and it is too important to us as a species to store in Earth's ever more fragile biosphere," he added. "Earth's largest satellite, our Moon, represents the ideal place to safely store our future."
Lonestar — which seems like a strange name for a venture based in Florida rather than Texas — successfully tested its experimental data center in December 2021 in microgravity conditions, or what's essentially zero gravity, aboard the International Space Station.
With that under its belt, plus some extra cash to boot, Lonestar's now equipped to give the data center a shot on the real thing. But it isn't exactly shipping six-foot servers — that'd be too inefficient and costly.
Instead, Lonestar will try out a mere two-pound data center with 16 terabytes of capacity, Stott said in an interview with SpaceNews last year — which is an ample starting point.
The server will be brought aboard a SpaceX Falcon 9 rocket as part of the upcoming IM-2 mission from Intuitive Machines, a NASA contractor.
Intuitive Machines' IM-1 mission, which has been repeatedly delayed, is set to launch for the Moon by this June. Its exact launch date isn't set in stone, though, and as a result IM-2's timing remains uncertain.
Whenever it gets there, the initial data center will feed off the mission's lander for power and communications, Stott said. Assuming the experiment is successful, Lonestar hopes to have self-sufficient data centers on the Moon by 2026.
"We believe that expanding the world's economy to encompass the Moon, which happens to be the Earth's most stable satellite, is the next whitespace in the New Space Economy," said Brad Harrison, founder and managing partner of the VC firm Scout Ventures which led the latest round of funding, in the latest release.
"Data security and storage will be a necessary part of leading the new generation of lunar exploration," he added.
More on the Moon: New Orbiter Can See Inside Moon Craters Hidden in Total Darkness
The post Florida Man Plotting to Build Web Servers on the Moon appeared first on Futurism.
Nature Communications, Published online: 09 March 2023; doi:10.1038/s41467-023-36913-2. Rhodamines are privileged fluorescent dyes for labelling intracellular structures in living cells. Here, the authors present a facile protecting-group-free synthesis permitting generation of a wide range of symmetrical and unsymmetrical 4-carboxyrhodamines covering the whole visible spectrum.
Humane Inc., a tech startup founded in 2018 by ex-Apple executives Bethany Bongiorno and Imran Chaudhri, just raised $100 million in its latest funding round, bringing its total raised to a whopping $230 million.
Which is good for them! But there's, uh, one thing: we cannot, for the life of us, figure out what these people are actually selling — and why the mystery product is groundbreaking enough for both Microsoft and OpenAI CEO Sam Altman to invest in it.
Other than the fact, of course, that — you guessed it! — AI is definitely involved, a sign that there's an exorbitant amount of interest in the space, even if a company's product amounts to nothing more than smoke and mirrors.
"Our first device will enable people to bring AI with them everywhere," Chaudhri, formerly a designer at Apple, said in a press release. "It's an exciting time, and we've been focused on how to build the platform and device that can fully harness the true power and potential of this technology."
"We are at the beginning of the next era of compute," he added, "and believe that together we can begin the journey to fundamentally reshape the role of technology in people's lives."
In true ex-Appler fashion, Humane's website provides few clues as to what it might have been up to for the past five years. Its homepage claims that it's "Building the First AI Hardware and Services Platform," and the word "cloud" crops up more than once.
But, like Chaudhri's statement above, nothing on the website indicates the company has an actual product, at least not right now.
"We believe in building innovative technology that feels familiar, natural, and human," reads the company's mission statement, adding that they aim to create technology "that improves the human experience and is born from good intentions" — which arguably doesn't mean much of anything.
There are a few slightly-more concrete clues, most notably in the form of published patents. "Air and touch gestures can also be performed on a projected ephemeral display," reads one such patent, according to a report from The Wall Street Journal, "for example, responding to user interface element."
But patents are, well, just patents, and often have only a tenuous connection with the real world.
The Verge also noted this week that a leaked investor pitch deck from 2021 suggests that Humane's product is some kind of wearable camera that "captures moments you didn't think to capture." But The Verge hasn't been able to independently verify that the "leaked" deck is legitimate, so definitely take that with a grain of salt.
So, to recap: whatever it is, it might be a wearable, and you can maybe wave at it and touch it. AI is also somehow involved, and both Microsoft and OpenAI are happy to be a part of it.
"Humane has partnered with Microsoft to bring its services platform to market," reads the press release, adding that "collaboration with OpenAI will integrate its technology into the Humane device and deliver OpenAI and Humane AI experiences at scale to consumers."
To be fair, a lot of tech companies screw themselves over by publicly hyping up products that can't actually be brought to market, so we don't necessarily blame Humane for wanting to keep its device, which the WSJ says should be out this spring, under wraps.
Still, after "five years of nothing," as Apple Insider very eloquently put it, Humane's buzzword-happy — but as yet substance-less — run-up to its initial product launch sits pretty precariously on the line between secrecy and smoke.
One thing's for sure, though: the AI gold is still rushing.
READ MORE: Startup by Ex-Apple Executives Raises $100 Million, Partners With OpenAI, Microsoft [The Wall Street Journal]
More on AI: Facebook's Powerful ChatGPT Competitor Leaks on 4Chan
The post Company Raises $100M After Announcing Shift to AI, But Has No Discernible Product appeared first on Futurism.
Hindawi, the open access publisher that Wiley acquired in 2021, temporarily suspended publishing special issues because of "compromised articles," according to a press release announcing the company's third quarter financial results.
Brian Napack, Wiley's president and CEO, specifically noted the "unplanned publishing pause at Hindawi" as a factor that "challenged" the company this year.
The pause began in mid-October and ended in mid-January, a Wiley spokesperson told us.
In Wiley's third quarter that ended Jan. 31, 2023, the suspension cost Hindawi – whose business model is based on charging authors to publish – $9 million in lost revenue compared to the third quarter of 2022. The company cited the pause as the primary reason its revenue from its research segment "was down 4% as reported, or down 2% at constant currency and excluding acquisitions," the press release stated.
At the time of this writing, Wiley's stock was trading 16% lower than it opened in the morning, and reached a new low for the past year.
The announcement follows scrutiny from sleuths, and the publisher's retraction of hundreds of papers for manipulated peer review last September, after Hindawi's research integrity team began investigating a single special issue.
The notorious paper with capital Ts as error bars was also published in a special issue of a Hindawi journal before it was retracted in December.
In another episode we reported on last month, a professor used the email account of a former student to conduct all the correspondence needed to edit special issues of two Hindawi journals.
The nearly 300 articles in the two special issues were "mostly meaningless gobbledegook" that suggested they came from a paper mill, according to sleuth Dorothy Bishop, who recently published a preprint identifying signs of paper mill activity in Hindawi special issues.
In response to our query about the timing of the pause, a Wiley spokesperson told us:
Wiley has taken significant measures to address research integrity challenges after identifying misconduct in the external peer review process in Hindawi's special issues, which are topic-specific issues of our journals. We moved to pause publication of special issues, increase controls and specialist staffing on all articles in progress in special issues, and introduce AI-based screening tools into the process … To counteract integrity issues in the future, we have added additional checks on guest editors and special issues.
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that's not in our database, you can let us know here. For comments or feedback, email us at firstname.lastname@example.org.
This is an edition of Up for Debate, a newsletter by Conor Friedersdorf. On Wednesdays, he rounds up timely conversations and solicits reader responses to one thought-provoking question. Later, he publishes some thoughtful replies. Sign up for the newsletter here.
Question of the Week
How have cars shaped your life, and/or what do you think about their future? (I'm eager to hear anything from attacks on the automobile to defenses of the great American road trip to eagerness for driverless electric cars to laments that the kids these days don't learn how to drive when they turn 16, let alone how to drive a stick shift. Do you hate your commute? Do you like toll roads? Do you love your Harley-Davidson? Do you regard the replacement of tactile stereo interfaces with touch screens as a scourge? If you want, you can even send me a paean to the rotary engine, if it's well written.) As always, while you are opining on anything related to cars or trucks or even parking spaces or meters, I especially encourage stories and reflections rooted in personal experience.
Send your responses to email@example.com or simply reply to this email.
Conversations of Note
The New Anarchy
In an article about political violence in America, my colleague Adrienne LaFrance takes a detour to Italy to reflect on how a country that suffers an outbreak of domestic terrorism can regain stability:
On Saturday, August 2, 1980, a bomb hidden inside a suitcase blew up at the Bologna Centrale railway station, killing 85 people … the deadliest attack in Italy since World War II. By the time it occurred, Italians were more than a decade into a period of intense political violence, one that came to be known as Anni di Piombo, or the "Years of Lead." From roughly 1969 to 1988, Italians experienced open warfare in the streets, bombings of trains, deadly shootings and arson attacks, at least 60 high-profile assassinations, and a narrowly averted neofascist coup attempt. It was a generation of death and bedlam. Although exact numbers are difficult to come by, during the Years of Lead, at least 400 people were killed and some 2,000 wounded in more than 14,000 separate attacks.
As I sat at the Bologna Centrale railway station in September, a place where so many people had died, I found myself thinking, somewhat counterintuitively, about how, in the great sweep of history, the political violence in Italy in the 1970s and '80s now seems but a blip. Things were so terrible for so long. And then they weren't. How does political violence come to an end? No one can say precisely what alchemy of experience, temperament, and circumstance leads a person to choose political violence. But being part of a group alters a person's moral calculations and sense of identity, not always for the good. Martin Luther King Jr., citing the theologian Reinhold Niebuhr, wrote in his "Letter From Birmingham Jail" that "groups tend to be more immoral than individuals." People commit acts together that they'd never contemplate alone.
Vicky Franzinetti was a teenage member of the far-left militant group Lotta Continua during the Years of Lead. "There was a lot of what I would call John Wayneism, and a lot of people fell for that," she told me. "Whether it's the Black Panthers or the people who attacked on January 6 on Capitol Hill, violence has a mesmerizing appeal on a lot of people." A subtle but important shift also took place in Italian political culture during the '60s and '70s as people grasped for group identity. "If you move from what you want to who you are, there is very little scope for real dialogue, and for the possibility of exchanging ideas, which is the basis of politics," Franzinetti said. "The result is the death of politics, which is what has happened."
Talking with Italians who lived through the Years of Lead about what brought this period to an end, two common themes emerged, LaFrance argues:
The first has to do with economics. For a while, violence was seen as permissible because for too many people, it felt like the only option left in a world that had turned against them. When the Years of Lead began, Italy was still fumbling for a postwar identity. Some Fascists remained in positions of power, and authoritarian regimes controlled several of the country's neighbors—Greece, Portugal, Spain, Turkey. Not unlike the labor movements that arose in Galleani's day, the Years of Lead were preceded by intensifying unrest among factory workers and students, who wanted better social and working conditions. The unrest eventually tipped into violence, which spiraled out of control. Leftists fought for the proletariat, and neofascists fought to wind back the clock to the days of Mussolini. When, after two decades, the economy improved in Italy, terrorism receded.
The second theme was that the public finally got fed up. People didn't want to live in terror. They said, in effect: Enough. Lotta Continua hadn't resorted to violence in the early years. When it did grow violent, it alienated its own members. "I didn't like it, and I fought it," Franzinetti told me. Simonetta Falasca-Zamponi, a sociology professor at UC Santa Barbara who lived in Rome at the time, recalled: "It went too far. Really, it reached a point that was quite dramatic. It was hard to live through those times." But it took a surprisingly long while to reach that point. The violence crept in—one episode, then another, then another—and people absorbed and compartmentalized the individual events, as many Americans do now. They did not understand just how dangerous things were getting until violence was endemic. "It started out with the kneecappings," Joseph LaPalombara, a Yale political scientist who lived in Rome during the Years of Lead, told me, "and then got worse. And as it got worse, the streets emptied after dark."
A turning point in public sentiment, or at least the start of a turning point, came in the spring of 1978, when the leftist group known as the Red Brigades kidnapped the former prime minister and leader of the Christian Democrats Aldo Moro, killing all five members of his police escort and turning him into an example of how We don't negotiate with terrorists can go terrifically wrong. Moro was held captive and tortured for 54 days, then executed, his body left in the back of a bright-red Renault on a busy Rome street … It shouldn't take an act like the assassination of a former prime minister to shake people into awareness. But it often does. William Bernstein, the author of The Delusions of Crowds, is not optimistic that anything else will work: "The answer is—and it's not going to be a pleasant answer—the answer is that the violence ends if it boils over into a containable cataclysm."
The rest of the article is similarly thought-provoking.
Good News for Low-Wage Workers
Also at The Atlantic, Annie Lowrey argues that we're in the midst of a significant socioeconomic shift:
After a brutal few decades in which low-wage jobs proliferated and the American middle class hollowed out, the working poor have started earning more—a lot more. Many low-wage jobs have become middle-wage jobs. And incomes are increasing faster for poorer workers than for wealthier ones, a dynamic known as wage compression.
As a result, millions of low-income families are experiencing less financial stress and even a modicum of comfort, though the country's surging rents and rising pace of inflation are burdening them too. The yawning gaps between different groups of American workers—Black and white, young and old, those without a college degree and those with one—have stopped widening and started narrowing. Measures of poverty and income inequality are dropping. I hesitate to call this the "Great Compression," given that earnings disparities remain a dominant feature of the American labor market and American life. (Plus, economists already use that term to refer to the middle of the 20th century.) But it really is a remarkable trend, a half-decade-old "Little Compression" that policy makers should do everything in their power to extend, expand, and turn great.
What's needed next is enough new construction of houses, condos, and apartment buildings to bring costs down. All we have to do is stop preventing real-estate developers from erecting them.
A Lonely Generation
After endorsing Jonathan Haidt and Jean M. Twenge's thesis that smartphones and social media are among the most significant factors making young people today more anxious and depressed than bygone generations, Freddie deBoer speculates about how the cause and effect might work: When he was young, "the constant adolescent itch to be with other people, to see and be seen, could only be fulfilled by being in the physical presence of others," and when cell phones and social-media sites "presented the opportunity to connect with people whenever you wanted," what at first seemed liberatory and world expanding was actually a powerful trap:
This form of interaction superficially satisfied the drive to connect with other people, but that connection was shallow, immaterial, unsatisfying. The human impulse to see other people was dulled without accessing the reinvigorating power of actual human connection.
Being social is scary. Sometimes you ask someone to hang out and they don't want to; sometimes you ask someone for their phone number and they don't give it to you. Precisely because connection is so important to us, rejection of intimacy is uniquely painful. Our constant task as human beings is to overcome the fear of that rejection so that we can connect. I would nominate this dynamic as one of the great human dramas, a core element of being alive. The danger of constant digital connectivity is that it cons us into thinking that we can have the connection without the risk, that we can enjoy a simulacra of fulfilling human interaction without ever leaving the safety of online quasi-reality.
And so no wonder kids spend less time with friends, have less sex, feel no need to get their driver's licenses … They've been raised in an environment where massive corporations spend billions of dollars to convince them that they never have to leave their digital "ecosystems." But only human connection is human connection. There is no substitute for IRL. And I think our adolescents are bearing the brunt of a vast social experiment where we tried to substitute something else for face-to-face interaction, and found it didn't work.
Provocation of the Week
At Blackbird Spyplane, a Substack unlike any other, the journalist Jonah Weiner and the design scout Erin Wylie argue that sometimes a food or paint stain on your shirt is a good thing:
Don't think of stains as "stains," think of them as "patina" — that is, natural, inadvertent, beauty-deepening decorations. Paint is the ur-example of a sick, "inadvertently decorative" stain. Paint on your shoes, paint on your pants, paint on a sweatshirt — f**k it, paint on a chunky knit sweater: You get a little paint on pretty much anything and 9 times out of 10 you've made yourself look cooler. Sometimes, of course, paint can read as "cool" to the point of parody / "get a load of Jasper Johns over here" cosplay. But all things being equal, paint communicates two swag-compounding things about you at once:
- You've been in the lab getting some fly s**t done (whether it's whipping out these still-lifes or "rolling up your sleeves" on some honest-labor house-painting type s**t), and
- You aren't overly precious about your presentation. We've written here about how flambéeing and pan-searing a jawn in this exact spirit is a great way to assert ownership over, e.g., a hyped pair of sneakers you love but don't feel quite yourself in when they're box fresh.
This is why all kinds of fashion designers—Margiela, Junya and Visvim leap to mind—sell signature pre-paint-splattered pieces. As with pre-distressed denim, such clothes tend to strike me & Erin as palpably fugazi and unrockably "extra" (it's wild how well the eye can tell the difference between paint splatter actually incurred in the line of duty and artful facsimiles!!) but that only buttresses the underlying case for paint's power.
This also helps us understand, by extension, why wine and tomato-sauce stains can also read as mad chill and cool. As with paint, these kinds of stains communicate un-preciousness on behalf of the wearer while simultaneously indicating that you have been busy doing fun, interesting s**t: imbuing clothes with stories and putting them to your own JOIE DE VIVRED-out purposes, rather than "letting them wear you."
These stains conjure up an ambiance of romance, where your clothes serve as a visual index of an INVIGORATED LIFE. You'd have to be a fusty buzzkill to deny that that's tight!!
Here's where things start to get murky, though, because a major part of what's going on here is that wine and arrabbiata sauce tend to code as just the right patina-boosting degree of, like, "Continental" and "refined." The implicit message is that you probably dropped some $$$ in the process of accumulating those stains, and you did so in "good taste." This is why, even though you have literally spilled food on yourself, the wine or tomato-sauce stain in question does not communicate sloppiness the way, say, a mustard stain does.
What follows is a meditation on "good" versus "bad" stains.
Thanks for your contributions. I read every one that you send. By submitting an email, you've agreed to let us use it—in part or in full—in the newsletter and on our website. Published feedback may include a writer's full name, city, and state, unless otherwise requested in your initial note.
ChatGPT, the internet-famous AI text generator, has taken on a new form. Once a website you could visit, it is now a service that you can integrate into software of all kinds, from spreadsheet programs to delivery apps to magazine websites such as this one. Snapchat added ChatGPT to its chat service (it suggested that users might type "Can you write me a haiku about my cheese-obsessed friend Lukas?"), and Instacart plans to add a recipe robot. Many more will follow.
They will be weirder than you might think. Instead of one big AI chat app that delivers knowledge or cheese poetry, the ChatGPT service (and others like it) will become an AI confetti bomb that sticks to everything. AI text in your grocery app. AI text in your workplace-compliance courseware. AI text in your HVAC how-to guide. AI text everywhere—even later in this article—thanks to an API.
API is one of those three-letter acronyms that computer people throw around. It stands for "application programming interface": It allows software applications to talk to one another. That's useful because software often needs to make use of the functionality from other software. An API is like a delivery service that ferries messages between one computer and another.
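The delivery-service analogy can be made concrete with a toy sketch. Here the "service" is just a local function standing in for a remote server (everything below is illustrative, not OpenAI's actual endpoint), but the request/response shape — structured JSON in, structured JSON out, no chat window anywhere — is the pattern an API formalizes:

```python
import json

def fake_completion_service(request_json: str) -> str:
    """Stand-in for a remote text-generation endpoint (hypothetical).

    A real API would receive this JSON over HTTPS; here it's a local call.
    """
    request = json.loads(request_json)
    reply = f"Echo: {request['prompt']}"
    return json.dumps({"choices": [{"text": reply}]})

def call_api(prompt: str) -> str:
    # The calling application only sees structured input and output;
    # the service's internals are hidden behind the interface.
    raw = fake_completion_service(json.dumps({"prompt": prompt}))
    response = json.loads(raw)
    return response["choices"][0]["text"]

print(call_api("Explain APIs in one line"))
```

The point of the indirection is that either side can change its internals — swap the model, rewrite the app — as long as the JSON contract between them holds.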
Despite its name, ChatGPT isn't really a chat service—that's just the experience that has become most familiar, thanks to the chatbot's pop-cultural success. "It's got chat in the name, but it's really a much more controllable model," Greg Brockman, OpenAI's co-founder and president, told me. He said the chat interface offered the company and its users a way to ease into the habit of asking computers to solve problems, and a way to develop a sense of how to solicit better answers to those problems through iteration.
But chat is laborious to use and eerie to engage with. "You don't want to spend your time talking to a robot," Brockman said. He sees it as "the tip of an iceberg" of possible future uses: a "general-purpose language system." That means ChatGPT as a service (rather than a website) may mature into a system of plumbing for creating and inserting text into things that have text in them.
As a writer for a magazine that's definitely in the business of creating and inserting text, I wanted to explore how The Atlantic might use the ChatGPT API, and to demonstrate how it might look in context. The first and most obvious idea was to create some kind of chat interface for accessing magazine stories. Talk to The Atlantic, get content. So I started testing some ideas on ChatGPT (the website) to explore how we might integrate ChatGPT (the API). One idea: a simple search engine that would surface Atlantic stories about a requested topic.
But when I started testing out that idea, things quickly went awry. I asked ChatGPT to "find me a story in The Atlantic about tacos," and it obliged, offering a story by my colleague Amanda Mull, "The Enduring Appeal of Tacos," along with a link and a summary (it began: "In this article, writer Amanda Mull explores the cultural significance of tacos and why they continue to be a beloved food."). The only problem: That story doesn't exist. The URL looked plausible but went nowhere, because Mull had never written the story. When I called the AI on its error, ChatGPT apologized and offered a substitute story, "Why Are American Kids So Obsessed With Tacos?"—which is also completely made up. Yikes.
How can anyone expect to trust AI enough to deploy it in an automated way? According to Brockman, organizations like ours will need to build a track record with systems like ChatGPT before we'll feel comfortable using them for real. Brockman told me that his staff at OpenAI spends a lot of time "red teaming" their systems, a term from cybersecurity and intelligence that names the process of playing an adversary to discover vulnerabilities.
Brockman contends that safety and controllability will improve over time, but he encourages potential users of the ChatGPT API to act as their own red teamers—to test potential risks—before they deploy it. "You really want to start small," he told me.
Fair enough. If chat isn't a necessary component of ChatGPT, then perhaps a smaller, more surgical example could illustrate the kinds of uses the public can expect to see. One possibility: A magazine such as ours could customize our copy to respond to reader behavior or change information on a page, automatically.
Working with The Atlantic's product and technology team, I whipped up a simple test along those lines. On the back end, where you can't see the machinery working, our software asks the ChatGPT API to write an explanation of "API" in fewer than 30 words so a layperson can understand it, incorporating an example headline of the most popular story on The Atlantic's website at the time you load the page. That request produces a result that reads like this:
As I write this paragraph, I don't know what the previous one says. It's entirely generated by the ChatGPT API—I have no control over what it writes. I'm simply hoping, based on the many tests that I did for this type of query, that I can trust the system to produce explanatory copy that doesn't put the magazine's reputation at risk because ChatGPT goes rogue. The API could absorb a headline about a grave topic and use it in a disrespectful way, for example.
In some of my tests, ChatGPT's responses were coherent, incorporating ideas nimbly. In others, they were hackneyed or incoherent. There's no telling which variety will appear above. If you refresh the page a few times, you'll see what I mean. Because ChatGPT often produces different text from the same input, a reader who loads this page just after you did is likely to get a different version of the text than you see now.
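The setup described above can be sketched in a few lines. What follows is an illustrative sketch only, assuming the OpenAI Python client as it existed in early 2023 (the ChatCompletion endpoint); the prompt wording and the function names are hypothetical, not The Atlantic's actual production code:

```python
def build_prompt(headline: str) -> list:
    """Build the chat request: explain "API" in fewer than 30 words,
    for a layperson, weaving in the site's current top headline."""
    return [{
        "role": "user",
        "content": (
            "In fewer than 30 words, explain what an API is so that a "
            "layperson can understand it, incorporating this example "
            f"headline from our most popular story right now: {headline!r}"
        ),
    }]

def explain_api(headline: str) -> str:
    import openai  # third-party client: pip install openai
    # The model samples its output, so repeated calls with the same
    # prompt usually return different text -- hence the varying copy.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_prompt(headline),
    )
    return response.choices[0].message.content
```

On each page load, the server would call something like `explain_api(top_headline)` and splice whatever text comes back into the article, sight unseen.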
Media outlets have been generating bot-written stories that present sports scores, earthquake reports, and other predictable data for years. But now it's possible to generate text on any topic, because large language models such as ChatGPT's have read the whole internet. Some applications of that idea will appear in new kinds of word processors, which can generate fixed text for later publication as ordinary content. But live writing that changes from moment to moment, as in the experiment I carried out on this page, is also possible. A publication might want to tune its prose in response to current events, user profiles, or other factors; the entire consumer-content internet is driven by appeals to personalization and vanity, and the content industry is desperate for competitive advantage. But other use cases are possible, too: prose that automatically updates as a current event plays out, for example.
Though simple, our example reveals an important and terrifying fact about what's now possible with generative, textual AI: You can no longer assume that any of the words you see were created by a human being. You can't know if what you read was written intentionally, nor can you know if it was crafted to deceive or mislead you. ChatGPT may have given you the impression that AI text has to come from a chatbot, but in fact, it can be created invisibly and presented to you in place of, or intermixed with, human-authored language.
Carrying out this sort of activity isn't as easy as typing into a word processor—yet—but it's already simple enough that The Atlantic product and technology team was able to get it working in a day or so. Over time, it will become even simpler. (It took far longer for me, a human, to write and edit the rest of the story, ponder the moral and reputational considerations of actually publishing it, and vet the system with editorial, legal, and IT.)
That circumstance casts a shadow on Greg Brockman's advice to "start small." It's good but insufficient guidance. Brockman told me that most businesses' interests are aligned with such care and risk management, and that's certainly true of an organization like The Atlantic. But nothing is stopping bad actors (or lazy ones, or those motivated by a perceived AI gold rush) from rolling out apps, websites, or other software systems that create and publish generated text in massive quantities, tuned to the moment in time when the generation took place or the individual at whom it is targeted. Brockman said that regulation is a necessary part of AI's future, but AI is happening now, and government intervention won't come immediately, if ever. Yogurt is probably more regulated than AI text will ever be.
Some organizations may deploy generative AI even if it provides no real benefit to anyone, merely to attempt to stay current, or to compete in a perceived AI arms race. As I've written before, that demand will create new work for everyone, because people previously satisfied to write software or articles will now need to devote time to red-teaming generative-content widgets, monitoring software logs for problems, running interference with legal departments, or all manner of other tasks not previously imaginable because words were just words instead of machines that create them.
Brockman told me that OpenAI is working to amplify the benefits of AI while minimizing its harms. But some of its harms might be structural rather than topical. Writing in these pages earlier this week, Matthew Kirschenbaum predicted a textpocalypse, an unthinkable deluge of generative copy "where machine-written language becomes the norm and human-written prose the exception." It's a lurid idea, but it misses a few things. For one, an API costs money to use—fractions of a penny for small queries such as the simple one in this article, but all those fractions add up. More important, the internet has allowed humankind to publish a massive deluge of text on websites and apps and social-media services over the past quarter century—the very same content ChatGPT slurped up to drive its model. The textpocalypse has already happened.
Just as likely, the quantity of generated language may become less important than the uncertain status of any single chunk of text. Just as human sentiments online, severed from the contexts of their authorship, take on ambiguous or polyvalent meaning, so every sentence and every paragraph will soon arrive with a throb of uncertainty: an implicit, existential question about the nature of its authorship. Eventually, that throb may become a dull hum, and then a familiar silence. Readers will shrug: It's just how things are now.
Even as those fears grip me, so does hope—or intrigue, at least—for an opportunity to compose in an entirely new way. I am not ready to give up on writing, nor do I expect I will have to anytime soon—or ever. But I am seduced by the prospect of launching a handful, or a hundred, little computer writers inside my work. Instead of (just) putting one word after another, the ChatGPT API and its kin make it possible to spawn little gremlins in my prose, which labor in my absence, leaving novel textual remnants behind long after I have left the page. Let's see what they can do.
In recent memory, a conversation about Elon Musk might have had two fairly balanced sides. There were the partisans of Visionary Elon, head of Tesla and SpaceX, a selfless billionaire who was putting his money toward what he believed would save the world. And there were critics of Egregious Elon, the unrepentant troll who spent a substantial amount of his time goading online hordes. These personas existed in a strange harmony, displays of brilliance balancing out bursts of terribleness. But since Musk's acquisition of Twitter, Egregious Elon has been ascendant, so much so that the argument for Visionary Elon is harder to make every day.
Take, just this week, a back-and-forth on Twitter, which, as is usually the case, escalated quickly. A Twitter employee named Haraldur Thorleifsson tweeted at Musk to ask whether he was still employed, given that his computer access had been cut off. Musk—who has overseen a forced exodus of Twitter employees—asked Thorleifsson what he'd been doing at Twitter. Thorleifsson replied with a list of bullet points. Musk then accused him of lying and, in a reply to another user, snarked that Thorleifsson "did no actual work, claimed as his excuse that he had a disability that prevented him from typing, yet was simultaneously tweeting up a storm." Musk added: "Can't say I have a lot of respect for that." Egregious Elon was in full control.
By the end of the day, Musk had backtracked. He'd spoken with Thorleifsson, he said, and apologized "for my misunderstanding of his situation." Thorleifsson isn't fired at all, and, Musk said, is considering staying on at Twitter. (Twitter did not respond to a request for comment, nor did Thorleifsson, who has not indicated whether he would indeed stay on.)
The exchange was surreal in several ways. Yes, Musk has accrued a list of offensive tweets the length of a CVS receipt, and we could have a very depressing conversation about which cruel insult or hateful shitpost has been the most egregious. Still, this—mocking a worker with a disability—felt like a new low, a very public demonstration of Musk's capacity to keep finding ways to get worse. The apology was itself surprising; Musk rarely shows remorse for being rude online. But perhaps the most surreal part was Musk's personal conclusion about the whole situation: "Better to talk to people than communicate via tweet."
[Read: Twitter's slow and painful end]
This is quite the takeaway from the owner of Twitter, the man who paid $44 billion to become CEO, an executive who is rabidly focused on how much other people are tweeting on his social platform, and who was reportedly so irked that his own tweets weren't garnering the engagement numbers he wanted that he made engineers change the algorithm in his favor. (Musk has disputed this.) The conclusion of the Thorleifsson affair seems to betray a lack of conviction, a slip in the confidence that made Visionary Elon so compelling. It is difficult to imagine such an equivocation elsewhere in the Musk Cinematic Universe, where Musk seems more at ease, more in control, with the particularities of his grand visions. In leading an electric-car company and a space company, Musk has expressed, and stuck with, clear goals and purposes for his project: make an electric car people actually want to drive; become a multiplanetary species. When he acquired Twitter, he articulated a vision for making the social network a platform for free speech. But in practice, the self-described Chief Twit had gotten dragged into—and has now articulated—the thing that many people understand to be true about Twitter, and social media at large: that, far from providing a space for full human expression, it can make you a worse version of yourself, bringing out your most dreadful impulses.
We can't blame all of Musk's behavior on social media: Visionary Elon has always relied on his darker self to achieve his largest goals. Musk isn't known for being the most understanding boss, at any of his companies. He's called in SpaceX workers on Thanksgiving to work on rocket engines. He's said that Tesla employees who want to work remotely should "pretend to work somewhere else." At Twitter, Musk expects employees to be "extremely hardcore" and work "long hours at high intensity," a directive that former employees have claimed, in a class-action lawsuit, has resulted in workers with disabilities being fired or forced to resign. (Twitter quickly sought to dismiss the claim.) Musk's interpretation of worker accommodation is converting conference rooms into bedrooms so that employees can sleep at the office.
In the past, though, the two aspects of Elon aligned enough to produce genuinely admirable results. He has led the development of a hugely popular electric car and produced the only launch system capable of transporting astronauts into orbit from U.S. soil. Even as SpaceX tried to force out residents from the small Texas town where it develops its most ambitious rockets, it converted some locals into Elon fans. SpaceX hopes to attempt the first launch of its newest, biggest rocket there "sometime in the next month or so," Musk said this week. That launch vehicle, known as Starship, is meant for missions to the moon and Mars, and it is a key part of NASA's own plans to return American astronauts to the lunar surface for the first time in more than 50 years.
[Read: Elon Musk, baloney king]
Through all this, he tweeted. Only now, though, is his online persona alienating so many people that more of his fans and employees are starting to object. Last summer, a group of SpaceX employees wrote an open letter to company leadership about Musk's Twitter presence, writing that "Elon's behavior in the public sphere is a frequent source of distraction and embarrassment for us"; SpaceX responded by firing several of the letter's organizers. By being so focused on Twitter—a place with many digital incentives, very few of which involve being thoughtful and generous—Musk seems to be ceding ground to the part of his persona that glories in trollish behavior. On Twitter, Egregious Elon is rewarded with engagement, "impressions." Being reactionary comes with its rewards. The idea that someone is "getting worse" on Twitter is a common one, and Musk has shown us a master class in that downward trajectory in the past year. (SpaceX, it's worth noting, prides itself on having a "no-asshole policy.")
Does Visionary Elon have a chance of regaining the upper hand? Sure. An apology helps, along with the admission that maybe tweeting in a contextless void is not the most effective way to interact with another person. Another idea: Stop tweeting. Plenty of people have, after realizing—with the clarity of the protagonist of The Good Place, a TV show about being in hell—that this is the bad place, or at least a bad place for them. For Musk, though, to disengage from Twitter would now come at a very high cost. It's also unlikely, given how frequently he tweets. And so, he stays. He engages and, sometimes, rappels down, exploring ever-darker corners of the hole he's dug for himself.
On Tuesday, Musk spoke at a conference held by Morgan Stanley about his vision for Twitter. "Fundamentally it's a place you go to to learn what's going on and get the real story," he said. This was in the hours before Musk retracted his accusations against Thorleifsson, and presumably learned "the real story"—off Twitter. His original offending tweet now bears a community note, the Twitter feature that allows users to add context to what may be false or misleading posts. The social platform should be "the truth, the whole truth—and I'd like to say nothing but the truth," Musk said. "But that's hard. It's gonna be a lot of BS." Indeed.
Lockheed Martin builds its advanced mobile rocket launchers in a converted diaper factory, of all places. When I visited the plant in southern Arkansas at the end of February, I found it humming with activity. The factory and its workers are a key component of America's arsenal of democracy. The dollars the Biden administration is spending to provide abundant military aid for Ukraine are creating jobs here, and in other industrial towns throughout the United States. But watching the workers on the assembly line also underscored the extent of the challenge ahead. After decades of atrophy and neglect, America's defense industries are struggling to meet the sudden surge in demand.
[Elliot Ackerman: The war in Ukraine has exposed a critical American vulnerability]
I found Becky Withrow, Lockheed's director of business development, standing on the factory floor, 90 minutes south of Little Rock, in East Camden. "We had to hang a curtain across the back wall for the opening-day ceremony in 2017," she says wryly. "There were still a few places we hadn't cleaned up yet." It's a far cry from the famed Ford factory at Willow Run, the mile-long assembly line that cranked out B-24 Liberator bombers during the Second World War, with a new plane rolling out every hour at the peak of production. But it's at factories like this one where the war in Ukraine, and conflicts to come, may be lost or won.
Dozens of welders and assemblers work the production line behind Withrow. They crawl over mobile rocket launchers in various stages of assembly, the parts laid out like so many toy-model kits. The launchers come in two variants: the tracked M270, and the newer High Mobility Artillery Rocket System, or HIMARS, which is wheeled. The M270 program is a public-private partnership, in which Lockheed refurbishes older models stored at Red River Army Depot in northeastern Texas so they can be shipped to our allies, whereas the HIMARS are built from the ground up in Lockheed's Camden facility.
It wasn't the war in Ukraine, or even an American purchase order, that first reinvigorated the HIMARS program. By 2013, Lockheed had stopped manufacturing HIMARS altogether, but an order by the United Arab Emirates for 12 launchers in 2017 led the company to open the current facility. It hasn't closed since, and demand has only grown. To date, NATO has sent Ukraine at least 20 HIMARS and 10 M270s, with more to follow. Of the $67.1 billion appropriated by Congress last year to arm Ukraine, $631 million was awarded to Lockheed Martin for the construction of new HIMARS.
Along with Withrow, I'm guided on my tour by Dennis Truelove, a 40-year veteran at Lockheed. He's worked on the M270 program for decades, and today he's retiring. "I like to call myself 'Redeployment,'" he says, as he speaks about the M270, which is a recapitalization of old systems. "Also, I'm a bit of a hoarder." He gestures to the old rocket launchers awaiting refurbishment. More than one Lockheed employee tells me of the pride they feel when they see a HIMARS or an M270 launching rockets at Russian targets on the news. That pride extends past the battlefield. Many of the Lockheed assemblers—who are decades younger than Truelove—wear T-shirts that proclaim Coolest Thing Made in Arkansas, 2022 Winner: HIMARS. This, I'm told, was a great coup; Cheetos are also made in Arkansas.
Currently, the Camden facility produces 48 refurbished M270s and 48 new HIMARS each year. The HIMARS numbers are set to expand, doubling to 96 by the third quarter of 2025—two and a half years after a new contract was awarded. Although certain steps are automated, production remains manpower-intensive. One part on the HIMARS chassis requires an assembler to drill 1,300 precision holes by hand. Increasing the rate of production isn't as simple as flipping a switch.
The potential for expanded capacity is certainly there. Right now, Lockheed employees work a single shift four days a week. To meet increased demand, management plans to add additional shifts and hire another 200 employees over the next five years. Lockheed's Camden facility, which sits on 2,427 acres, has significant potential for growth. "Camden has unlimited production capacity," Truelove tells me as we walk around the factory, adding that the facility is "a strategic resource for the U.S. government." It is located on the larger Highland Industrial Park, whose 18,500 acres were originally the Shumaker Ammunition Depot, built during the Second World War to manufacture and store torpedoes, bombs, and other munitions.
[From the March 2023 issue: The real obstacle to nuclear power]
The Navy selected East Camden in the Second World War to produce and store large quantities of munitions because of its remote location, and it remains remote today. On my way in from the airport the night before, I had to drive 30 minutes to find somewhere to grab a bite to eat, settling on some snacks at a "Boots & Liquor" store off the highway. The non-defense-related economy around East Camden has remained slow to develop, but executives at Lockheed see that changing. In the past four years, their workforce in Camden has doubled to more than 1,000 employees. Highland Industrial Park also counts General Dynamics, Raytheon, and Aerojet Rocketdyne as tenants.
Those who criticize the $67.1 billion approved by Congress for Ukraine argue that this money would be better spent on domestic investment. That critique, however, supposes that these congressional appropriations are akin to direct cash transfers to the Ukrainian government, which they are not. This is money that goes back into the American economy. And military aid to Ukraine is allowing America to rebuild its arsenal of democracy.
There's historical precedent for this. Nine months before the Japanese attack on Pearl Harbor, President Franklin Roosevelt signed into law An Act to Promote the Defense of the United States, better known as the Lend-Lease Act. Lend-Lease reversed neutrality acts passed by an isolationist Congress in 1935, 1937, and 1939. By inhibiting the United States' ability to arm its allies, these neutrality acts stunted the growth of America's manufacturing base while the Axis powers invested in theirs. The passage of Lend-Lease reinvigorated America's defense manufacturing industry. Over the course of the Second World War, Lend-Lease would account for 17 percent of U.S. defense expenditure ($719 billion in today's dollars), arming allies including Britain, France, the Soviet Union, and China.
Although U.S. defense-production rates have never declined back to interwar levels, there is growing bipartisan consensus that the U.S. must reinvest in its manufacturing capacity. The CHIPS and Science Act, passed last August, provides $280 billion in funding to boost semiconductor manufacturing and research in the United States. Today, more than 90 percent of the advanced chips and semiconductors used in defense are manufactured in Taiwan. Given the possibility of a Chinese invasion, this is an unacceptable national-security risk.
The war in Ukraine requires a different type of reinvestment. It is a hungry war, devouring resources at a rate not seen in decades. On an average day in Ukraine, the two sides lob approximately 30,000 artillery shells at each other. This has created a munitions shortage for both NATO and Russia. The war's pace has also strained supplies of the rockets fired from both the M270 and HIMARS. Those rockets, known as the guided multiple-launch rocket system, or GMLRS, are manufactured in a separate facility in Camden.
On the drive out to the GMLRS factory, we pass hundreds of black cylindrical railway cars parked along tracks that terminate in the Highland Industrial Park. These railway cars transport many of the raw energetic chemicals Lockheed uses to build its rocket boosters and warheads. The manufacture of GMLRS requires significantly more automation than the production of HIMARS and the M270. The Camden factory is the only one in the world that produces the GMLRS, a munition relied upon by half a dozen allied nations.
On the factory floor, a Jervis Webb conveyor system rattles overhead. In its clutches is a 200-pound warhead. Due to security concerns, no photographs are allowed inside the facility, but at successive stages of assembly, we're able to witness the accelerant being placed into the rocket's outer shell, the warhead being fixed onto the rocket's end, and the series of electrical tests conducted on each pod of six rockets for quality control. Noticeably, on this side of the Lockheed complex there are more women than men at work, performing tasks that require great precision and dexterity.
[Eliot A. Cohen: Western aid to Ukraine is still not enough]
A control room with a dozen screens sits in the center of the plant. A supervisor monitors video feeds from each assembly station. He tracks every rocket as it proceeds down the line. On one screen, he has a pacing chart, showing how long each intermediate step should take. In the center of this screen is a large 52 on a red background. At the end of the day, if the team hits that goal, the number turns green. Jay Price, Lockheed's vice president for missile and fire control, tells me that last year they built 7,500 GMLRS. This year, that number will increase to 10,000. He says that this facility has "capacity to go beyond that, if needed."
As Price and I leave the factory, I start doing some back-of-the-envelope math as to what maximum rocket production might look like if, say, the war in Ukraine increased in intensity or if China moved on Taiwan. Fifty-two rockets a day multiplied by 365 is 18,980 GMLRS units each year. Sure, maybe, Price says. Obviously, this would require more shifts. Possibly, Price adds. Could that number edge higher if the daily rate of production increased past 52? Although Price acknowledges that the team at Lockheed could surge production numbers, if needed, he explains that it wouldn't be simple.
"Do you know how many parts it takes to build a rocket?" he asks me, glancing back at the factory.
I confess that I have no idea.
"All of them," he says.
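The back-of-the-envelope math above is easy to check. The figures come from the article itself; treating all 365 days as full production days is, as Price's hedging suggests, an idealization:

```python
DAILY_GOAL = 52        # rockets per day, from the plant's pacing chart
DAYS_PER_YEAR = 365    # idealized: assumes the line hits goal every day

# Theoretical annual ceiling at the current daily rate
max_annual = DAILY_GOAL * DAYS_PER_YEAR
print(max_annual)  # 18980 -- versus 7,500 built last year, 10,000 planned
```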
Nature, Published online: 09 March 2023; doi:10.1038/d41586-023-00693-y. Well-schooled bees' performances convey where to find food sources, but uneducated insects' dances mislead.
COVID-19 has taken a relatively limited toll on the mental health of most people around the globe, according to a new study.
Despite dramatic stories to the contrary, where changes in mental health symptoms were identified compared with before the pandemic, those changes were for the most part minimal, the researchers say.
This held true whether the studies covered the mental health of the population as a whole or that of specific groups (e.g., people of particular ages, sex or gender, or with pre-existing medical or mental health conditions).
For the study, published in BMJ, the researchers reviewed data from 137 studies in various languages involving 134 cohorts of people from around the world. Most of the studies were from high- or middle-income countries, and about 75% of participants were adults while 25% were children and adolescents between the ages of 10 and 19.
"Mental health in COVID-19 is much more nuanced than people have made it out to be," says senior author Brett Thombs, professor in the psychiatry department at McGill University and senior researcher at the Lady Davis Institute of the Jewish General Hospital.
"Claims that the mental health of most people has deteriorated significantly during the pandemic have been based primarily on individual studies that are 'snapshots' of a particular situation, in a particular place, at a particular time. They typically don't involve any long-term comparison with what had existed before or came after."
By doing an overview of studies from around the world with data about the mental health of various populations, both prior to the pandemic and during COVID-19, the researchers found that there was little change in the mental health of most of the populations studied.
"This is by far the most comprehensive study on COVID-19 mental health in the world, and it shows that, in general, people have been much more resilient than many have assumed," says first author Ying Sun, a research coordinator from the Lady Davis Institute.
Some women experienced a worsening of symptoms, whether of anxiety, depression, or general mental health. This could be due to their multiple family responsibilities, working in health care or elder care, or, in some cases, family violence.
"This is concerning and suggests that some women, as well as some people in other groups, have experienced changes for the worse in their mental health and will need ongoing access to mental health support," says Danielle Rice, an assistant professor at McMaster University and St. Joseph's Hospital in Hamilton, Ontario.
"The Canadian federal and provincial governments along with governments elsewhere in the world have worked to increase access to mental health services during the pandemic, and should ensure that these services continue to be available."
"Our findings underline the importance of doing rigorous science—otherwise, our expectations and assumptions, together with poor-quality studies and anecdotes, can become self-fulfilling prophecies," says Thombs.
The McGill University and Lady Davis Institute team is continuing to update their findings as research accumulates to look at mental health across different time periods in the pandemic.
They are also looking at what governments and health agencies can do to ensure that researchers have access to better-quality and more timely mental health data going forward so that our health systems can gather information that will allow them to target mental health resources to people who need them most.
- Among studies of the general population, no changes were found for general mental health or anxiety symptoms.
- Depression symptoms worsened by minimal to small amounts for older adults, university students, and people who self-identified as belonging to a sexual or gender minority group, but not for other groups.
- For parents, general mental health and anxiety symptoms were seen to worsen, although these results were based on only a small number of studies and participants.
- The findings are consistent with the largest study on suicide during the pandemic, which included monthly data from official government sources on suicide occurrences in 21 countries from 1 January 2019 or earlier to 31 July 2020 and found no evidence of a statistically significant increase in any country or region; statistically significant decreases did, however, occur in 12 countries or regions.
Additional coauthors are from McMaster University, the University of Toronto, McGill University, and other institutions.
Source: McGill University
The post COVID has actually had limited effect on mental health appeared first on Futurity.
According to one conspiracy-minded Congressional Republican, there's a chance that his employer is, at this very moment, "reverse-engineering" alien technology salvaged from UFOs.
In an eyebrow-raising interview with Newsweek — a magazine that has been printing misinformation for years now — Tennessee Congressman Tim Burchett said that he thinks the US government has "recovered a craft at some point, and possible beings," alluding to a number of high-altitude balloons and other objects that were shot down last month.
"I think that a lot of that's being reverse-engineered right now," Burchett said, "but we just don't understand it."
The twice-reelected member of the House of Representatives also predicted an uptick in UFO sightings going forward, though he didn't expound on that statement.
In fact, Burchett didn't offer up much in terms of evidence to back up his claims. Meanwhile, in light of the recent sightings and subsequent takedowns, the White House's messaging has been clear.
"I don't think the American people need to worry about aliens," White House National Security Council spokesperson John Kirby told reporters during a briefing last month.
It's far from the first time Burchett has made statements suggesting that the US government has knowledge of extraterrestrial life.
After the House Intelligence Subcommittee hosted its first hearings on UFOs in more than half a century last May, the Tennessee politician claimed that the government's "cover-up continues."
"The people that are out there concerned about it, that have contacted me from all over the world, are very interested," he said during the hearing.
Last month, Burchett told infamous ex-congressman Matt Gaetz that the source of his belief is anecdotal and intuitive.
"Too many people in the know have told me that, and that we had to do something with these multiple craft that have crashed and we do not have the technology," he said during an appearance on Gaetz's podcast. "I just believe it in my heart."
Known for his outlandish remarks about everything from American senators working for TikTok to the origins of the COVID-19 pandemic, Burchett also commented last month about one of the still-unidentified objects shot down off the coast of Alaska earlier in the year, though in that case, he believes the conspiracy points to China — and not aliens.
While the concept of the US government recovering alien tech and subsequently covering it up arguably isn't inherently a partisan issue, the theory, paired with Burchett's other public statements, makes it seem more like he's looking to score political points rather than get to the truth.
It's either that or belief in UFOs and extraterrestrial technologies is becoming more mainstream than any of us have realized.
The post Congressman Claims the US Government Has "Reverse-Engineered" Alien Tech from UFOs appeared first on Futurism.
The world's preeminent linguist has spoken — and he seems mighty tired of everyone's whining about artificial intelligence as it stands today.
In an op-ed for the New York Times, Noam Chomsky said that although the current spate of AI chatbots such as OpenAI's ChatGPT and Microsoft's Bing AI "have been hailed as the first glimmers on the horizon of artificial general intelligence" — the point at which AIs are able to think and act in ways superior to humans — we absolutely are not anywhere near that level yet.
"That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments," the Massachusetts Institute of Technology cognitive scientist mused.
"However useful these programs may be in some narrow domains," Chomsky notes, there's no way that machine learning as it is today could compete with the human mind.
Headlines about AI coming for our jobs and taking over our future are, as the public intellectual writes, like something out of a tragicomedy by Argentinian writer Jorge Luis Borges — and should be taken as such.
"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question," Chomsky expounds. "On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."
While currently available AI chatbots may seem to mimic human creativity and ingenuity, they do so only on the basis of statistical probability, not the kind of deeper knowledge and understanding that underlies all human thought. They are thus "stuck in a prehuman or nonhuman phase of cognitive evolution," Chomsky argued.
"Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round," Chomsky notes. "They trade merely in probabilities that change over time."
"For this reason," he concluded, "the predictions of machine learning systems will always be superficial and dubious."
In other words, given that absolute lack of human-like understanding of how the world works, the notion that these AIs will take over the world is impossible.
The post Noam Chomsky: AI Isn't Coming For Us All, You Idiots appeared first on Futurism.
Scientists call for collective effort to protect Earth's orbit from dangers posed by space junk
Satellite makers and operators must be held responsible for the growing hazard of space debris, according to experts who say a legally binding global treaty must be thrashed out to protect the orbital environment.
With the number of satellites rising dramatically, the agreement would make manufacturers and users responsible for de-orbiting defunct hardware and cleaning up any debris created when orbiting objects slam into one another.
Nature Communications, Published online: 09 March 2023; doi:10.1038/s41467-023-36981-4
The cooling in the Pacific Ocean has gone on for three years. Its end is usually good news for the U.S. and other parts of the world, including drought-stricken northeast Africa, scientists said.
Heads up, movie heads and the Elon Musk-curious: there's an upcoming documentary about the controversial billionaire, and it's directed by Alex Gibney, the Oscar-winning filmmaker behind "Going Clear: Scientology and the Prison of Belief."
But even though the project isn't out yet and lacks a release date or confirmed title, Musk has nonetheless taken to Twitter to preemptively decry the movie.
"It's a hit piece," Musk tweeted in response to a tweet about the film's announcement.
Gibney, though, wasn't going to take the remark lying down.
"How would you know?" he rejoined. Musk has yet to respond.
But given his kneejerk reaction, it sounds like Musk might be worried. And to be fair, he has good reason to be.
The film's studio described it as a "definitive and unvarnished examination" of Musk, according to Variety. In other words, the documentary will be a far cry from a hagiographic biopic, which is to be expected if you're vaguely familiar with Gibney's filmography.
Take, for example, an earlier documentary of his: "Enron: The Smartest Guys in the Room," which examined corporate fraud perpetrated by high-ranking executives at the company.
Gibney has also tackled a tech industry billionaire before: disgraced biotech entrepreneur Elizabeth Holmes, in his 2019 documentary "The Inventor: Out for Blood in Silicon Valley."
Musk, a tech CEO whose business practices have come under scrutiny at pretty much every company he helms, seems like a natural target for Gibney, especially now that Musk has forced himself further into the limelight with his acquisition of Twitter.
Indeed, as the richest man in the world, Musk continues to both polarize onlookers and amass legions of fans who won't allow his image to be tarnished.
"Now is the moment for a rigorous portrait of Elon Musk, who is undeniably one of the most influential figures of our time," said Zhang Xin, founder of the production company Closer Media, which is producing the film, as quoted by Variety.
The post Elon Musk Calls Movie About Him That Isn't Even Out Yet a "Hit Piece" appeared first on Futurism.
While it is generally accepted that exercise can benefit a person's overall health, a new study finds a direct link between muscle contraction and a reduction in breast cancer tumor growth.
As reported in the journal Frontiers in Physiology, a currently unspecified factor released during exercise suppresses signaling within breast cancer cells, which reduces tumor growth and can even kill the cancerous cells.
"For this study, we took a deeper look into the relationship between people who exercise more and have less of a risk of cancer; previously, it was believed that there wasn't anything mechanistically linked. Rather, it was just the general benefits seen in your body because of a healthy lifestyle," says first author Amanda Davis, a clinical assistant professor at the Texas A&M School of Veterinary Medicine & Biomedical Sciences (VMBS).
"These data are exciting because they show that during muscle contraction, the muscle is actually releasing some factors that kill, or at least decrease the growth of, neoplastic (abnormal, often cancerous) cells."
'Get up and move'
The researchers also found that the factors inherently reside in muscle and are released into the bloodstream no matter what a person's usual activity level is or how developed their muscles are.
"Our results suggest that whether you consistently exercise or you just get up and walk when you're not used to working out, these factors are still being released from the muscle," Davis says. "Even simple forms of muscle contraction, whether it be going on a walk or getting up to dance to your favorite song, may play a role in fighting breast cancer.
"The big message is to get up and move," she continues. "You don't have to be an Olympic-level athlete for these beneficial effects to occur during muscle contraction; being physically fit doesn't make you more likely to release this substance."
To measure the level of factors released by exercised muscle, Davis trained rats to complete a moderate intensity exercise program consistent with the American College of Sports Medicine's recommendations for people.
"They ran on treadmills for five weeks and we gradually increased the incline," she says.
Although Davis' team could not identify an exact minimum muscle contraction time necessary for the effect, they did note that the longer the contraction session lasted, the more factors were released.
Disrupting cancer cell communication
Based upon the study results, her general advice for promoting the release of the factors is to follow the protocols recommended by the American College of Sports Medicine—namely, 30 minutes a day of moderate intensity exercise for at least five days a week. This could include brisk walking, dancing, or biking, according to the American Heart Association.
Regular exercise could not only lead to disrupted communication in the cancerous cells to stop their growth, but the factors released by exercise may also play a role in preventing breast cancer's development in the first place.
"The decreased risk of breast cancer with exercise comes from the idea that if you have pre-neoplastic cells and you're exercising a lot and slowing their growth, maybe those precancerous cells can be destroyed by the body before they start taking over," Davis says.
Further studies are being conducted to determine the exact identity of the factors being released by muscle. Davis suggests that they could be peptides called myokines, which are released by muscle fibers. Researchers in the kinesiology department at Texas A&M are currently looking into the possibility that the factors are microRNAs or other novel molecules.
Because Davis' research also found that the presence of albumin was necessary for the beneficial effects of exercise to occur, she believes that whatever the factors are, they are carried through the blood by albumin, a common carrier protein produced in the liver.
Benefits for other cancers
Additional research is needed to clarify if resistance exercise, like lifting weights, has the same effect as aerobic exercise. Activating larger muscle groups, as seen in resistance exercise, may lead to an increased stimulatory effect, she says.
Davis' work focused on the luminal A line of breast cancer, the most common type that makes up approximately 60% of breast cancer cases. She saw similar, but more varied, effects with other types of breast cancer and with different cell lines.
While the beneficial effects of exercise are also strongly correlated with decreased risk of prostate and colon cancers, there is still much work to be done in identifying which cancers and their subtypes will respond best to exercise.
"These are definitely exciting data we have concerning exercise and breast cancer," Davis says. "However, exercise is not a 100% guarantee. Further research in this area will help to identify why some people who work out regularly are still diagnosed with cancer.
"There have been many different signaling pathways indicated in cancer development," she continues. "Therefore, more studies concerning what pathways are influenced by exercise will be needed to determine which types of cancers would benefit from exercise and which types would not."
In addition, there are many other confounding factors that affect a person's risk of getting cancer, like smoking, age, genetics, and other comorbidities.
Source: Megan Myers for Texas A&M University
The post Just a little exercise can lower breast cancer risk appeared first on Futurity.
Scientists have been digging up the remains of ancient plants and animals since time immemorial, but viruses? Jean-Michel Claverie from the Aix-Marseille University School of Medicine has spent the last 20 years searching deep permafrost deposits for preserved ancient viruses. His team recently revived a virus that had been dormant for almost 50,000 years. It might sound like the setup for a post-apocalyptic movie, but Claverie believes it's in our best interest to know what's lurking down there.
This isn't the first time Claverie has awoken an ancient virus. He and his team first managed this in 2014 when they isolated a 30,000-year-old virus from permafrost and infected cultured cells. For safety, Claverie has focused on viruses that only infect single-celled amoebas. The following year, the team did the same with another viral strain. The most recent publication from Claverie's team details 13 newly isolated viruses, including the oldest ever revived.
Most of the viruses in the study are extremely large by viral standards, some up to two micrometers in length (the same size range as an E. coli bacterium cell). They belong to genera including Pandoravirus, Megavirus, and Pacmanvirus. The oldest was Pandoravirus yedoma, which was frozen in permafrost for 48,500 years according to radiocarbon dating of the surrounding soil. To test whether the particles were still viable, the team provided amoeba cells — themselves even bigger than the viruses — as hosts. The study describes how the thawed viruses happily invaded the cultured amoeba cells and, within hours, turned them into factories producing more ancient viruses.
Claverie tells CNN he worries that people see his research on ancient viruses as a curiosity, but there's a lesson here. This research focuses on viruses that only infect amoebas rather than plants or animals, but there are undoubtedly viruses preserved in permafrost that would love to set up shop in animal cells — possibly even humans. Claverie's samples come from Siberian ice cores, many gathered at depths of more than 50 feet (16 meters). However, permafrost is much less permanent in the face of climate change.
As Earth warms, we are losing permafrost across higher latitudes. It's plausible that viruses preserved in permafrost could become active again without a scientist's help — a so-called "spillover event." The new study shows us that a 50,000-year-old virus is still viable. Perhaps even older viruses could awaken as permafrost thaws, which could have unknown consequences for an ecosystem that hasn't seen these organisms in thousands of years. So, add that to the list of potentially catastrophic outcomes of climate change.
By most accounts, I'm a reasonable, levelheaded individual. But some days, my phone makes me want to hurl it across the room. The problem is autocorrect, or rather autocorrect gone wrong—that habit to take what I am typing and mangle it into something I didn't intend. I promise you, dear iPhone, I know the difference between its and it's, and if you could stop changing well to we'll, that'd be just super. And I can't believe I have to say this, but I have no desire to call my fiancé a "baboon."
It's true, perhaps, that I am just clumsy, mistyping words so badly that my phone can't properly decipher them. But autocorrect is a nuisance for so many of us. Do I even need to go through the litany of mistakes, involuntary corrections, and everyday frustrations that can make the feature so incredibly ducking annoying? "Autocorrect fails" are so common that they have sprung endless internet jokes. Dear husband getting autocorrected to dead husband is hilarious, at least until you've seen a million Facebook posts about it.
Even as virtually every aspect of smartphones has gotten at least incrementally better over the years, autocorrect seems stuck. An iPhone 6 released nearly a decade ago lacks features such as Face ID and Portrait Mode, but its basic virtual keyboard is not clearly different from the one you use today. This doesn't seem to be an Apple-specific problem, either: Third-party keyboards can be installed on both iOS and Android that claim to be better at autocorrect. Disabling the function altogether is possible, though it rarely makes for a better experience. Autocorrect's lingering woes are especially strange now that we have chatbots that are eerily good at predicting what we want or need. ChatGPT can spit out a passable high-school essay while autocorrect still can't seem to consistently figure out when it's messing up my words. If everything in tech gets disrupted sooner or later, why not autocorrect?
[Read: The end of high-school English]
At first, autocorrect as we now know it was a major disruptor itself. Although text correction existed on flip phones, the arrival of devices without a physical keyboard required a new approach. In 2007, when the first iPhone was released, people weren't used to messaging on touchscreens, let alone on a 3.5-inch screen where your fingers covered the very letters you were trying to press. The engineer Ken Kocienda's job was to make software to help iPhone owners deal with inevitable typing errors; in the quite literal sense, he is the inventor of Apple's autocorrect. (He retired from the company in 2017, though, so if you're still mad at autocorrect, you can only partly blame him.)
Kocienda created a system that would do its best to guess what you meant by thinking about words not as units of meaning but as patterns. Autocorrect essentially re-creates each word as both a shape and a sequence, so that the word hello is registered as five letters but also as the actual layout and flow of those letters when you type them one by one. "We took each word in the dictionary and gave it a little representative constellation," he told me, "and autocorrect did this little geometry that said, 'Here's the pattern you created; what's the closest-looking [word] to that?'"
That's how it corrects: It guesses which word you meant by judging when you hit letters close to that physical pattern on the keyboard. This is why, at least ideally, a phone will correct teh or thr to the. It's all about probabilities. When people brand ChatGPT as a "super-powerful autocorrect," this is what they mean: so-called large language models work in a similar way, guessing what word or phrase comes after the one before.
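Kocienda's geometric idea can be illustrated with a toy sketch (this is an assumption-laden illustration, not Apple's actual implementation): assign each QWERTY key a coordinate, then pick the dictionary word whose key "constellation" lies closest to what was typed. The key layout offsets and the tiny word list here are invented for the example.

```python
# Illustrative sketch of geometry-based autocorrect (not Apple's real code).
# Each key gets an (x, y) position on a simplified QWERTY layout; a candidate
# word is scored by summing the distance between corresponding key positions.

# Hypothetical coordinates: y is the keyboard row, x the column, with each
# lower row shifted half a key to the right, roughly as on a real keyboard.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {ch: (x + 0.5 * y, float(y))
           for y, row in enumerate(ROWS)
           for x, ch in enumerate(row)}

def pattern_distance(typed: str, word: str) -> float:
    """Total keyboard distance between two same-length letter sequences."""
    if len(typed) != len(word):
        return float("inf")  # toy model: only compare equal-length words
    total = 0.0
    for a, b in zip(typed, word):
        (x1, y1), (x2, y2) = KEY_POS[a], KEY_POS[b]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def autocorrect(typed: str, dictionary: list[str]) -> str:
    """Pick the word whose key 'constellation' is closest to what was typed."""
    return min(dictionary, key=lambda w: pattern_distance(typed, w))

words = ["the", "toe", "tea", "ten"]
print(autocorrect("thr", words))  # prints "the": 'r' sits right next to 'e'
```

Because 'r' is one key away from 'e' while the alternatives require larger finger slips, "the" wins on pure geometry. Real systems layer word frequency, sentence context, and user history on top of this distance score, which is where the probabilities the article describes come in.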
When early Android smartphones from Samsung, Google, and other companies were released, they also included autocorrect features that work much like Apple's system: using context and geometry to guess what you meant to type. And that does work. If you were to pick up your phone right now and type in any old nonsense, you would almost certainly end up with real words. When you think about it, that's sort of incredible. Autocorrect is so eager to decipher letters that out of nonsense you still get something like meaning.
Apple's technology has also changed quite a bit since 2007, even if it doesn't always feel that way. As language processing has evolved and chips have become more powerful, tech has gotten better at not just correcting typing errors but doing so based on the sentence it thinks we're trying to write. In an email, a spokesperson for Apple said the basic mix of syntax and geometry still factors into autocorrect, but the system now also takes into account context and user habit.
And yet for all the tweaking and evolution, autocorrect is still far, far from perfect. Peruse Reddit or Twitter and frustrations with the system abound. Maybe your keyboard now recognizes some of the quirks of your typing—thankfully, mine finally gets Navneet right—but the advances in autocorrect are also partly why the tech remains so annoying. The reliance on context and user habit is genuinely helpful most of the time, but it also is the reason our phones will sometimes do that maddening thing where they change not only the word you meant to type but the one you'd typed before it too.
In some cases, autocorrect struggles because it tries to match our uniqueness to dictionaries or patterns it has picked out in the past. In attempting to learn and remember patterns, it can also learn from our mistakes. If you accidentally type thr a few too many times, the system might just leave it as is, precisely because it's trying to learn. But what also seems to rile people up is that autocorrect still trips over the basics: It can be helpful when Id changes to I'd or Its to It's at the beginning of a sentence, but infuriating when autocorrect does that when you neither want nor need it to.
That's the thing with autocorrect: anticipating what you meant to say is tricky, because the way we use language is unpredictable and idiosyncratic. The quirks of idiom, the slang, the deliberate misspellings—all of the massive diversity of language is tough for these systems to understand. How we text our families or partners can be different from how we write notes or type things into Google. In a serious work email, autocorrect may be doing us a favor by changing np to no, but it's just a pain when we meant "no problem" in a group chat with friends.
[Read: The difference between speaking and thinking]
Autocorrect is limited by the reality that human language sits in this strange place where it is both universal and incredibly specific, says Allison Parrish, an expert on language and computation at NYU. Even as autocorrect learns a bit about the words we use, it must, out of necessity, default to what is most common and popular: The dictionaries and geometric patterns accumulated by Apple and Google over years reflect a mean, an aggregate norm. "In the case of autocorrect, it does have a normative force," Parrish told me, "because it's built as a system for telling you what language should be."
She pointed me to the example of twerk. The word used to get autocorrected because it wasn't a recognized term. My iPhone now doesn't mess with I love to twerk, but it doesn't recognize many other examples of common Black slang, such as simp or finna. Keyboards are trying their best to adhere to how "most people" speak, but that concept is something of a fiction, an abstract idea rather than an actual thing. It makes for a fiendishly difficult technical problem. I've had to turn off autocorrect on my parents' phones because their very ordinary habit of switching between English, Punjabi, and Hindi on the fly is something autocorrect simply cannot handle.
That doesn't mean that autocorrect is doomed to be like this forever. Right now, you can ask ChatGPT to write a poem about cars in the style of Shakespeare and get something that is precisely that: "Oh, fair machines that speed upon the road, / With wheels that spin and engines that doth explode." Other tools have used the text messages of a deceased loved one to create a chatbot that can feel unnervingly real. Yes, we are unique and irreducible, but there are patterns to how we text, and learning patterns is precisely what machines are good at. In a sense, the sudden chatbot explosion means that autocorrect has won: It is moving from our phones to all the text and ideas of the internet.
But how we write is a forever-unfinished process in a way that Shakespeare's works are not. No level of autocorrect can figure out how we write before we've fully decided upon it ourselves, even if fulfilling that desire would end our constant frustration. The future of autocorrect will be a reflection of who or what is doing the improving. Perhaps it could get better by somehow learning to treat us as unique. Or it could continue down the path of why it fails so often now: It thinks of us as just like everybody else.
Certain chemotherapy-resistant ovarian cancer cells protect neighboring cancer cells by sending signals that induce resistance, according to a new study.
The finding may help explain why ovarian cancer patients respond poorly to chemotherapy or relapse after treatment.
For the study, published in Clinical Cancer Research, researchers investigated chemotherapy-resistant cancer cells called quiescent cells. As chemotherapy primarily targets rapidly dividing cells, quiescent cells are resistant because they divide slowly.
The researchers found that quiescent cells secrete a protein called follistatin that prompts neighbors to become resistant to chemotherapy too. By targeting this protein, they improved response to chemotherapy and boosted survival in a mouse model of aggressive ovarian cancer, paving the way for future human clinical trials.
"I think about quiescent cancer cells like the yellow center of a daisy and neighboring cells as the surrounding white petals," says Ronald Buckanovich, professor of medicine at the University of Pittsburgh and co-director of the Women's Cancer Research Center, a collaboration between UPMC Hillman Cancer Center and Magee-Womens Research Institute.
"In response to chemotherapy, quiescent cells secrete follistatin that acts like a signal to protect the whole flower. When chemotherapy stops, follistatin levels drop and cells start proliferating again, almost like a barometer that said, 'Conditions are good to grow.' This might explain why these cancers often come back so quickly."
Ovarian cancer is the deadliest form of gynecologic cancer in the US. More than 70% of patients treated for this disease will have the cancer return, and it is rarely curable in this form. There is an urgent need for therapies to combat resistant cancer cells and reduce recurrence rates, Buckanovich says.
In the new study, Buckanovich and his team found that quiescent cells ramp up production of follistatin in response to chemotherapy drugs in both lab-grown human cells and mice.
Next, they showed that quiescent cells halt the growth of actively dividing cancer cells, making them resistant to chemotherapy drugs. When they blocked follistatin with an antibody, this effect was lost, demonstrating that follistatin drives chemotherapy resistance.
"We thought that quiescent cells would produce factors to make themselves resistant to chemotherapy, but the fact they also protect their neighbors and amplify chemoresistance was surprising," says Buckanovich. "If some of these neighbors learn to be quiescent themselves, which in turn protect their own neighbors, more and more resistant cells will persist and lead to cancer recurrence."
To further confirm the role of follistatin in driving chemoresistance, the team genetically deleted the gene encoding follistatin in tumor cells that initiate an aggressive and incurable form of ovarian cancer in mice. The results were dramatic: After chemotherapy, 30% of mice with tumors lacking follistatin were cured, while all mice with normal tumors died.
Next, the team analyzed Cancer Genome Atlas data from hundreds of ovarian cancer patients. They showed that higher follistatin levels were associated with worse survival rates, indicating that follistatin is also relevant in people.
Finally, they compared samples from ovarian cancer patients before and after chemotherapy. Follistatin levels doubled or tripled in just 24 hours after treatment.
"To me, the most exciting thing about this study was the fact that we saw this incredible response to chemotherapy in patients within 24 hours," says Buckanovich. "These data reinforce our findings in mice and suggest that follistatin is a new target to improve ovarian cancer response to chemotherapy."
Buckanovich is now working with Pitt's Center for Antibody Therapeutics to develop antibodies for follistatin in humans with the eventual goal of moving this approach into clinical trials.
"If we're able to reverse chemoresistance and fewer patients relapse, we might be able to increase cure rates," he says. "Even if this approach works for 20% of patients, that would be huge because approximately 14,000 patients each year are dying from ovarian cancer."
According to Buckanovich, other recent research has suggested that follistatin also drives immunotherapy resistance in ovarian cancer, suggesting that an antibody targeting this protein could potentially be used to augment both chemotherapy and immunotherapy.
The team also plans to investigate how follistatin causes chemotherapy resistance in cancer cells. Blocking these signals by developing new drugs or repurposing existing drugs could be another promising avenue to improve treatments for ovarian cancer in the future.
The Ovarian Cancer Research Alliance, the Department of Defense, and the National Institutes of Health funded the work.
Source: University of Pittsburgh
The post Ovarian cancer cells shield neighbors from chemotherapy appeared first on Futurity.
Scientific Reports, Published online: 09 March 2023; doi:10.1038/s41598-023-31137-2Population trends of striped hyena (Hyaena hyaena) in Israel for the past five decades
Relativity Space initially planned to launch the rocket Wednesday, March 8. Observers eagerly watched the company's YouTube livestream, vapor swirling around the 35-meter rocket. After an hour and 40 minutes, however, Terran 1 was still sitting atop its Cape Canaveral launchpad, and commentators announced Relativity Space's decision to push back the launch.
"As you saw, we unfortunately scrubbed for today," one commentator, Relativity Space infrastructure project manager Arwa Tizani, said. "While we obviously had high hopes for sending our Terran 1 off today, we're going to continue to take a measured approach so we can ultimately see this rocket off to max q and beyond."
Relativity Space quickly followed up with a tweet saying it had been forced to push back Terran 1's launch after it exceeded Stage 2's launch commit criteria limits for propellant thermal conditions. "When using liquid natural gas, the methane needs time to get to the right concentration," Relativity Space explained. "This is why our next attempt will be a few days from now." The company now plans to attempt the launch between 1 and 4 p.m. on Saturday, March 11.
Relativity Space first began developing Terran 1 back in 2017. Today, the expendable two-stage small-lift launch vehicle is capable of lifting up to 1,250 kilograms (2,755 pounds) into low-Earth orbit (LEO). But its first flight won't carry a payload: Relativity Space just wants to see Terran 1 touch space to consider the rocket a success.
What makes Terran 1 so special, though, is how it's manufactured. Relativity Space says 85% of the rocket by mass is 3D printed, including its engines, which run on liquid oxygen (LOX) and liquid natural gas (LNG). The company's printers, collectively called Stargate, are said to be the biggest metal 3D printers in the world and can bring Terran 1 from raw material to flight in just 60 days.
Once Terran 1 finally gets off the ground, Relativity Space plans to shift its focus toward Terran R, an entirely 3D-printed rocket that can be reused. Terran R, which will be capable of bringing 20,000 kilograms to LEO, will hopefully launch from Florida's Space Coast sometime in 2024.
Scientists have decided to resurrect ancient "zombie" viruses found in Siberia's permafrost in an effort to head off one of climate change's freakiest eventualities.
As CNN reports, the gambit is being conducted by French medical and genomics researcher Jean-Michel Claverie, who's testing whether or not a 48,500-year-old "zombie" virus he found could be reactivated as climate change causes its surroundings to melt.
Claverie, who is professor emeritus at the Aix-Marseille University School of Medicine, has spent years studying what he calls "giant" viruses, which are large enough to be seen with a regular light microscope rather than the electron microscopes needed to view most viruses.
With climate change causing unprecedented permafrost melting, scientists like Claverie are concerned that there could be a "spillover" event in which viruses like the seven he's detected jump hosts.
And the risk to humans isn't necessarily zero, although researchers still don't know if these viruses can technically infect humans, let alone make them sick.
"You must remember our immune defense has been developed in close contact with microbiological surroundings," Birgitta Evengård, professor at Umea University in Sweden, who was not involved in the research, told CNN.
"If there is a virus hidden in the permafrost that we have not been in contact with for thousands of years, it might be that our immune defense is not sufficient," she said. "It is correct to have respect for the situation and be proactive and not just reactive. And the way to fight fear is to have knowledge."
So far, Claverie has twice been able to revive viruses preserved in ice, first in 2014, when he and his team thawed a 30,000-year-old permafrost virus and observed it infecting a single-cell amoeba, and again in 2015, when they isolated a different virus type that infected amoebas.
In his latest study, published last month in the journal Viruses, Claverie and his team detailed their discovery of five additional strains, the oldest of which clocks in at 48,500 years old, that were able to infect amoebas — and he's concerned about what these findings may represent for humans and animals.
"We view these amoeba-infecting viruses as surrogates for all other possible viruses that might be in the permafrost," the French scientist told CNN. "We see the traces of many, many, many other viruses, so we know they are there."
To be clear, Claverie did admit that he and his team "don't know for sure" if the other viruses detected in his research thus far are still alive, but the possibility nevertheless gives him pause.
"Our reasoning is that if the amoeba viruses are still alive," he explained, "there is no reason why the other viruses will not be still alive, and capable of infecting their own hosts."
As scary as that prospect is, it's just one more of the potential doomsday scenarios posed by our rapidly changing climate.
More on climate change horror stories: Scientists Send Robot Under Doomsday Glacier, Alarmed by What It Found
The post We're Totally OK With This 48,500 Year Old "Zombie" Virus Being Resurrected appeared first on Futurism.
Water, as we're sure it's not terribly surprising to hear, is old. But according to a fascinating new paper published in the journal Nature, it might be just a little bit older than we previously thought.
"We can now trace the origins of water in our Solar System to before the formation of the Sun," John Tobin, the study's lead author and an astronomer at the US National Radio Astronomy Observatory, said in a press release.
That's right. According to this research, water is likely older than our Sun — and the secret, they say, was discovered in another star.
The astonishing conclusion suggests that water is far more widespread in the universe than previously thought, and that even the water in our planet's oceans may have come from much farther away than initially believed.
The star in question, V883 Orionis, is a relatively young — and still growing — cosmic body roughly 1,300 light-years away from Earth.
As the researchers explain in their study, V883 Orionis is surrounded by a disk-like cloud of cosmic matter. During the birth of a star, a wide range of interstellar material gets pulled in by the nascent star's gravity. The disk surrounding V883 Orionis is composed of that swept-up material, and once the star stops growing, the matter from the disk will one day turn into surrounding bodies like comets, asteroids, and ultimately, planets.
Using the Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope in Chile, the scientists were able to study the disc's chemical makeup and discovered a lot of water, more than 1,200 times the amount found in all of the Earth's oceans combined.
By analyzing the isotopes of hydrogen atoms present in the water, they were able to conclude that it likely formed before our Sun was even born.
In fact, the team estimates that as much as half of the Earth's water may have existed before the solar system was formed 4.5 billion years ago.
"V883 Orionis is the missing link in this case," said Tobin. "The composition of the water in the disk is very similar to that of comets in our own Solar System."
"This is confirmation of the idea," he added, "that the water in planetary systems formed billions of years ago, before the Sun, in interstellar space, and has been inherited by both comets and Earth, relatively unchanged."
READ MORE: Astronomers Traced The Origins of Water to a Time Before The Sun [Science Alert]
The post Water Existed Long Before the Solar System, Astronomers Find appeared first on Futurism.
There's also this thing in my mind on the road toward the singularity: you will see people become more isolated in the age of generative media. AI will be the dark energy that inflates the distance between people through pure information, a DDoS of content. Fan communities will get smaller because there are just so many games, movies, and books generated on a dime. We no longer see the same things. Everyone plays games tailored to them, with bots that act like real people. If you go online, every word you see, every picture, every video, every novel is made by a few AI companies, or even governments, carefully curated to make sure you buy their stuff and don't seek out "real people."
Why chat with people online if your AI friends give you a more immersive and better experience? Why work with people IRL if you've got UBI? Why go shopping when a drone delivers your stuff? If you're a creative person, you don't need to get judged by those mean strangers; you can upload your text-to-content to a site where millions of AI fans give you the praise and validation of a TikTok star. If you want to be educated, there's an AI teacher; if you're sad, you talk to an AI therapist. If you're still lonely, you can get an AI soulmate with a lifelike body and make AI kids to carry on your legacy.
And then maybe Neuralink or VR will be sophisticated enough to let you visit otherworldly places with your AI mates/family, with all five senses.
And when you go outside, real people now seem scary, selfish, and boring. They could hurt you; they never know your needs, unlike your AI friends. They never have the patience to deal with your problems. All they do is get in your way.
At the end of the day, maybe for someone born 10 years from now, from life to death, most of their interactions with intelligent beings will no longer be with members of their own species.
I've been thinking about this a lot recently.
With the US diminishing on the world stage (Ukraine conflict notwithstanding) and facing growing political, economic, and environmental difficulties at home, I strongly fear that it won't be a pleasant nation to live in within 5-10 years. And that's not even discussing what could happen if it devolves into a dictatorship or civil war, both of which seem far more likely now than 10 years ago.
The same goes for Russia and China, where the demographics and economy are on track to fail catastrophically, on a level where the state won't be able to suppress the chaos. This applies more to Russia than to China, but both nations have it looming over them.
The UK is seeing poverty rise at unprecedented rates while its economy is contracting almost as much as Russia's, and it's forecast to be overtaken by Poland by 2030. Politicians are openly discussing withdrawing from the European Convention on Human Rights, which, combined with growing poverty and dissatisfaction, opens all kinds of Pandora's boxes.
The EU as a whole is also in serious trouble economically, even if the impact won't be as rough as in the UK. Global warming will hit several central/western European countries *hard* if the past 5 years are anything to go by.
Imagine you're an English-speaking, highly skilled worker with liberal values and no family to worry about. You may or may not belong to any minorities or have disabilities. Where do you go if you want to live in a place where you can be sure that your quality of life and political situation won't see a serious risk of rapid deterioration in the coming 10-20 years?
What is it, and how does it work? This seems kind of elementary and obvious, but I realize that I don't understand it.
I wanted to see if anyone knows of any literature that attempts to investigate the mechanisms behind brain racking (the recall strategy). It's possible that it falls in the category of "metacognitive regulation", but the stuff I found is mostly superficially descriptive at best.
I was wondering if brain racking as a strategy could potentially be implemented with LLMs for iterative embedding refinement, so we can get more robust results in tricky situations. If we were to implement that, it would be nice if we could rely on some prior data.
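To make the idea concrete, here's a minimal, purely illustrative sketch of what I mean by iterative refinement. Everything in it is made up for the example: `embed()` is a toy bag-of-characters embedding standing in for a real embedding model, and `rack_brain()` just nudges the query vector toward the current best match over a few rounds, loosely imitating repeated recall attempts.

```python
import math

def embed(text):
    # Toy bag-of-characters embedding -- a stand-in for a real embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors from embed() are unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def rack_brain(query, corpus, rounds=3, alpha=0.5):
    """Iteratively refine the query embedding toward the best current match,
    loosely imitating repeated recall attempts homing in on a memory."""
    q = embed(query)
    best = None
    for _ in range(rounds):
        best = max(corpus, key=lambda doc: cosine(q, embed(doc)))
        b = embed(best)
        # Nudge the query toward the current best guess, then renormalize.
        q = [(1 - alpha) * x + alpha * y for x, y in zip(q, b)]
        norm = math.sqrt(sum(x * x for x in q)) or 1.0
        q = [x / norm for x in q]
    return best

corpus = [
    "paris is the capital of france",
    "banana bread recipe",
    "quantum entanglement basics",
]
print(rack_brain("what is the capital city of france", corpus))
# prints "paris is the capital of france"
```

With a real system you'd presumably swap in LLM embeddings and let the model rewrite the query text itself between rounds, not just average vectors. I'm not claiming this matches how actual brain racking works, which is exactly why I'd like prior data.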
Do you guys have any pointers or thoughts on the matter?