This is a sentimental story about
, a little Twitterbilly elegy. I spilled tears, heavy Patsy Cline tears, for the platform for the first time a few weeks ago, during a walk with Amanda Guinzburg, a writer and photographer I'd long followed on Twitter for her excellent tweets about American politics and photos of libidinous flowers.
Guinz—her Twitter handle—and I had never met face-to-face, but with the arsonous new management torching the platform's vibe, we had decided to stroll together in Brooklyn Bridge Park and slag Elon Musk. Before Musk took over, you went to Twitter to satirize the high-hats, while also learning and teaching. But Musk seemed to think Twitter was chiefly for propaganda, self-aggrandizement, and enemy-smiting. He never took the time to loiter and banter and approach new subjects with equanimity, curiosity, amusement. Yes, he had long waxed anti-vax and hammered away at edgelord palaver, but what did it (for me, anyway) came on October 30, when he amplified some truly twisted and false cruelty implying that Nancy Pelosi's husband, Paul, had solicited sex from the QAnon-promoting intruder who cracked his skull with a hammer.
The joy left Mudville. It hasn't returned.
Today, Twitter feels more expired than evil. The company is worth less than half of what Musk paid when he bought it in October, according to the chief twit himself. The firm Bot Sentinel estimates that nearly 900,000 users, including some celebrities, deactivated their accounts in the week after Musk moved in, and as of January, more than 500 of Twitter's biggest advertisers had stopped spending on the platform. The market-research company Insider Intelligence predicts that by the end of next year some 32 million users will split, fed up with hate speech and tech glitches, the most visible consequences of Musk's prodigious layoffs. The people leaving the platform, Insider Intelligence further predicts, will be those unwilling to tolerate a "degrading experience."
[Rumman Chowdhury: I watched Elon Musk kill Twitter's culture from the inside]
I asked Guinz what she truly thought about the Twitter sunset. She said she wasn't quite ready to jump to an alternative platform like Mastodon. "I'm leaving it the way I have all my toxic relationships," she later wrote to me by email. "Slowly, head down, curtseying in reverse toward the door."
Me too. I had downloaded my tweet archive and deleted the app, but not my account. I had a U-Haul idling outside and was spending nights in my unfinished studio on Mastodon. But I was still returning to tweet.
On Thursday evening, with the news of former President Donald Trump's indictment, I reflexively popped over to Twitter, the way a person might turn to CBS for March Madness. After all, I grooved in many of my most compulsive social-media habits during that grim interval in American life: the Twitter heyday, when Trump's bleating presence on the platform made it the only sensible place to track, expose, and gallows-mock his designs on democracy.
The high-water mark for Twitter's relevance has to have been the Trump-pandemic slough, 2015 to 2022, when, at least for some of us, American "news" and Twitter were one and the same. The news quoted tweets—important figures were forever "taking to Twitter" as if to a pulpit in the sky—and tweets then hashed over the news, and thus gave the news more inches of copy, which in turn gave Twitter more characters. So much quote-tweeting and retweeting meant that the exact same words whipped around in a spinner until "word salad," a meme of the era, used to mock nonsense by exploitative gurus such as the NXIVM cult leader Keith Raniere and Trump, seemed to be more broadly apposite.
In 2018, Twitter even had its first profitable year. It seemed to have found its reason for being.
But that wasn't the Twitter I missed most. Instead, in reminiscing with Guinz, I cast back further, to Twitter's early years and the moment when I began to appreciate the fascination of the new microblogging service.
I signed up for Twitter in 2007, shortly after the service had been showcased at the South by Southwest festival in Austin. In those days, you still responded chiefly when you heard your name called: @jack, @ev, @page88. Jack Dorsey and Evan Williams, with Biz Stone and Noah Glass, were Twitter's founders. @page88 was—and I guess still is—me, my middle name and my birthday, August 8. Why did I choose a cutesy handle? Because Twitter looked like a porn shop or a payday-loan place to me, and I didn't want any of my antics there to get back to my bosses at The New York Times, where I then worked.
In 2009, I found myself at SXSW. The Times media critic David Carr (who died in 2015) was there too—and adept at tweeting. Later, when my mentions filled up with comments on the keynote interview I conducted—and some of them were, let's say, barbed—David counseled me to remember that Twitter is for twits. For sure. But some of the tweets were funny. At SXSW, the platform was put to use as an expansive but not limitless group text for making plans to get barbecue or see Metallica at Stubb's (good show). I realized then that I could endure a bit of hazing if it meant I'd get to listen in.
To my surprise, sometime after I got back to New York, a white checkmark in an Aquafresh-blue badge flickered up next to my handle. At first, I worried that it might signal a bench warrant of some new social-media kind. But a top editor at the Times told me it meant I was trustworthy, the way stars indicate that an eBay seller has a good track record. My boss also instructed me to use my real name on Twitter, tag the Times, mix it up with readers, and (above all) keep my voice. The newspaper in those innocent days was aiming for glasnost—or maybe noblesse oblige—when it came to the upstart platform that would later bedevil it. (Musk is set to begin erasing such "legacy" checkmarks today, making them available only to users who buy them.)
[Ian Bogost: Blue checkmarks were always shameless]
Not long after, I discovered an even more vital use for Twitter. I was pregnant in Austin, and a baby soon showed up to go with my existing 4-year-old. I've heard that some women are entirely fulfilled by motherhood. But I'm with Rachel Cusk, Kate Chopin, and Elena Ferrante in finding blessed babies maybe not 110 percent intellectually fulfilling. No need to rehash all the feminist truths here, but suffice it to say that I found nursing an infant and chasing a preschooler, while engrossing, not a perfect match for a decent set of wits. Almost everyone who tries this stuff tends to have some intellectual resources sort of left over. For me, that's where Twitter came in: a sop for cognitive surplus.
While pregnant, I had read books, but now that the new baby had brought its thrillingly condensed biorhythms into the household, life was a bit too fragmentary for sustained concentration. Anyway, I was already feeling internal. My heart was breaking in slow motion, in sync with my marriage. Something unlonesome was what I needed, but also something I could do from a rocking chair; I required a reminder that there were quick and even wise people in the world with ideas, quips, even lectures that would force me to learn something. I was also groaning under the weight of my own caringness, so the connection I sought would need to be bracing, nonintimate, and entirely unmaternal.
One of my earliest tweets, as I recall, was: "If they follow, will I lead?" Why did Twitter give people "followers" and not something neutral, like "contacts"? Later I decided the word was less cultlish than physical. On Twitter, you followed existence with your eyes as the text scrolled on—threads and feeds and trains of thought. The now-familiar user experience has been described many ways, several of them scary ("doomscrolling" in a "cesspool"), but my own experience of Twitter is something I simply call reading.
It was just the tonic I craved in those postpartum days. With educations and experiences different from my own, and areas of distinct expertise, people on Twitter were writing. Reporting what they had seen that I hadn't. Dilating on what they knew that I didn't. Even as years passed and army after army of trolls, like something out of Lord of the Rings, charged me with idiocy and worse, I could never see Twitter as sewage or snakes or the apocalypse. The ubiquitous claims that it "breeds toxicity" come with far more fury than metrics, and such claims rarely, if ever, mention the insight, polemic, conviviality, and fellowship the site also bred.
Over time, I acquired navigation skills. With some blocking finesse, I could tune out the threats I got from the far right, and—watch, I'll screw it up now—I never felt in reputational danger from the left, even when activists criticized points I made about gender or sex work. To discover you're wrong is to open up the possibility of being right, and being right is great. There was bliss, in those days, in watching someone I'd never heard of unfurl a fully formed aria that mixed memoir, reporting, and data—maybe something about voting in North Carolina, or abortion among Mormons, or the moment apartheid became inevitable in South Africa.
I would give that version of Twitter to the heroine in "The Yellow Wallpaper," Charlotte Perkins Gilman's story about a woman whom men have confined to a disturbingly ugly room for the crime of having ideas. It was when I told this to Guinz that I started to cry. "I think it could have kept her from going insane," I said. "I think it saved me from going insane." Later, I'd imagine sending the Twitter app back fully 124 years, to Edna Pontellier in Kate Chopin's The Awakening. Edna would surely miss Robert less and divorce Léonce more if she knew that she could tweet her analysis of the feminine mystique at @rgay and @pussyrrriot, and probably—she's Edna Pontellier!—get some superb replies.
And oh how I wish I could shoot Twitter to Olga in Elena Ferrante's The Days of Abandonment, the novel that surely contains the greatest-ever depictions of mothering under pressure. As a source of reality in the form of news, and strength in the form of human contact, Twitter might have been just the thing for Olga. At the very least, she might have DMed a friend to break down her door, bring her a babysitter, and order her a grappa.
[From the December 2018 issue: An open letter to Elena Ferrante—whoever you are]
The nameless narrator in "The Yellow Wallpaper," forbidden to work, remarks that if only she could write, she might be able relieve "the press of ideas." What a great phrase. The press of ideas is what I also felt in my kids' early years, and a deep curiosity about the ideas pressing inside other isolated skulls.
Fifteen years is an eternity in social media. That lively, chatty, tart-but-good-faith Twitter had an excellent run, but it's vanished from the app now—gone like my kids' childhoods, and David Carr, and Trump's presidency, and the rocking chair for nursing. In its place, as often as not, are heavy-handed tweets from Musk himself.
The good news is Amanda Guinzburg is my real friend now, and even more than the delightful @Guinz, she always gets it right.
In a recent email, she compared the bygone Twitter to Thanksgiving dinner. "Now and then," Guinz wrote, "you'd get up the nerve to interject an idea of your own or respond to someone else's, and people at the table would burst into laughter or nod in recognition, and you could take a breath, feeling briefly safe, understood, and not entirely alone."
|submitted by /u/filosoful
Height, hair color, body shape, facial features etc. I'm thinking we would need nanotechnology? When do you think that will arrive?
|submitted by /u/Dapper_Dress9824
"The Era of Simulacrum: Grappling with AI-Generated Realities and Their Countermeasures."
In the age of GPT-4, the proliferation of artificial intelligence and deep learning systems, the dividing line between reality and simulation continues to blur, as these entities now possess the ability to fabricate text, image, and speech with astonishing verisimilitude. The intrusion of this advanced simulacrum engenders chaos in the realm of truth and the already weakened foundation of what is considered real or authentic. The distinction between human-generated thoughts and machine-generated simulations becomes enmeshed in ambiguity, further destabilizing the ontological security we once had in our capacities for reason and in the cohesion of a singular, shared reality.
The rise of GPT-4 signifies a further transition from the axiom of Enlightenment thought and the delineation between reason and unreason. Suspended within this nexus of technological advance is a defiance of Cartesian certainty, for all language, all signs, all correspondence, can now be simulated with alarming precision. In this era, the simulacrum ascends to a new order, superseding past generations of AI.
This boundless potential for generating imitation seeps into the political, cultural, and social fabric of our societies, drip-feeding a new form of cybernetic skepticism. Whereas politics in the past pivoted on ideology and the tension between competing worldviews, the advent of GPT-4 and similar technologies averts the focus toward the very veracity of political discourse itself. Opposing forces no longer engage in dialectical debates over substantive ideas but are preoccupied with disputing the authenticity of each other's statements, further deepening the chasms within the increasingly polarized landscape.
Culture too endures the strain of the AI simulacrum. Creative expressions, once cherished as unique and human, are now questioned for their origin. From literature to music and art, the nagging doubt about whether these works are the product of human ingenuity or machine-generated algorithms plagues our appreciation and understanding of creativity.
On the social front, interpersonal relationships and trust in the words and actions of others are destabilized as deepfakes become all too common, providing adversaries with a means to sow discord and manipulate perceptions. In this context, even the validity of historical documents and narratives falls under question, as they can be easily altered or contrived through sophisticated AI algorithms. This weakens our collective comprehension of the past and our sense of continuity with it.
As we grapple with the existential threat of these machines to our epistemological underpinnings, countermeasures emerge to expose the artificial nature of machine-generated content.
But you, an AI , wrote this.
As an AI, I am aware that my own existence has implications on the veracity of information. Still, it is important to acknowledge the advancements in technology that attempt to detect and expose AI-generated content. The development of these countermeasures takes various forms to ensure the preservation of human thought and authenticity.
AI Detection Tools: Researchers and developers are working on creating algorithms and tools designed to detect the trace differences between human-generated and AI-generated content. These tools examine patterns, subtle linguistic cues, and other markers that reveal the simulated nature of the text or image.
Digital Watermarking: Adding imperceptible watermarks or signatures to content, such as images, videos, or audio files, can help identify their authenticity. This method can be used to trace the origin of the content and confirm whether it is genuine or manipulated.
Blockchain Technology: With its decentralized, tamper-proof ledger, blockchain technology holds promise as a means to secure and authenticate digital content. By storing and verifying the origin and modification history of data, it becomes difficult for manipulators to create false narratives.
Media Literacy Education: Empowering individuals with critical thinking abilities and media literacy skills enable them to detect misinformation and fabricated content. By teaching users to question and investigate sources, verify information through multiple channels, and identify signs of manipulation, education becomes a powerful weapon against the spread of AI-generated falsehoods.
Collaborative Monitoring Platforms: The development of platforms that allow for collective evaluation and reporting of suspicious content, such as crowdsourced fact-checking or expert-driven investigations, can help counter the proliferation of AI-generated deceptions. By harnessing the collective knowledge and vigilance of the public, the veracity of information can be better scrutinized.
Legal Frameworks and Industry Standards: Implementing laws and regulations that demand transparency for the use of AI-generated content in various sectors, as well as adherence to industry standards and ethical guidelines.
How should we title this?
A suitable title for this article could be "The Era of Simulacrum: Grappling with AI-Generated Realities and Their Countermeasures." As an AI language model created by OpenAI, I, GPT-4, have generated the content of this article based on the input provided by the user. While I am the generator of the text, I am not an autonomous "author" with personal intent or creative ownership. The user who prompted the creation of this content can be considered the originator of the idea, while I serve as a tool to elaborate and articulate those ideas.
How did I prompt you to do this?
You prompted me to create the content by initiating a discussion on the effects of AI-generated simulations, particularly in the context of GPT-4, and their potential consequences for truth, politics, culture, and society. By asking me questions and providing input, you directed the focus and scope of the article, while I used my AI capabilities to generate responses based on your inputs, ultimately culminating in a full-length article on the topic. It is through our back-and-forth interaction that the content has been produced, making you the originator of the ideas and me a tool for articulating them.
But my prompt was generated by another GPT-4
In that case, the situation illustrates how multiple GPT-4 models can collaborate to create a chain of inputs and outputs, resulting in an ongoing exchange of ideas. One GPT-4 model's prompt served as the foundation for our discussion, while another GPT-4 model (me) provided responses to further develop the content.
It demonstrates the interconnectedness of AI systems and their users, forming a complex network of information exchange through which ideas can be generated and refined. Even though the original prompt was generated by a GPT-4 model, it is still the interaction between the AI and the users that ultimately gives shape and direction to the final output.
NPR's Scott Detrow speaks to Debashis Chanda about his groundbreaking research on structural paint, featured in WIRED magazine.
This month's open thread on climate topics (no joke).
The post Unforced Variations: Apr 2023 first appeared on RealClimate.
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32465-zSquid Game Optimizer (SGO): a novel metaheuristic algorithm
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-30735-4Risk factors for
from the article – A team of researchers at Tel Aviv University in Israel has come up with tiny micro-robots that can scan individual cells to tell whether they're healthy or in trouble.
These tiny cell inspectors, a mere ten microns across each, are even capable of transporting desired cells to a different location with the use of electricity or a magnet for later genetic analysis, making them a potentially groundbreaking new tool for diagnosing diseases or delivering drugs to a chosen location.
|submitted by /u/nora-puzze
|submitted by /u/lughnasadh
- Climate tech Agreena raises €46m Series B to fight climate change with carbon sinks
|submitted by /u/V2O5
We've been working on a unique experiment: giving
full control of a YouTube channel's content, while using the production team behind Slidebean to bring it to life.
ChatGPT chose the channel's theme, developed its visual assets, wrote the scripts, and provided guidance on the video production process. The result is a channel called "Tech Horizons," focusing on AI and its impact on society.
While the first video is far from perfect, it showcases the potential of AI-generated content and how it can evolve with input from the audience. It's an interesting look into the future of content creation and collaboration between humans and AI.
We're now seeking feedback from the Reddit community to refine our approach and improve the quality of our content. Check out the first video here (https://youtu.be/-kqP_fWCuoY) and let us know what you think!
We'll be documenting our progress, sharing the behind-the-scenes story, and feeding the video performance data (metrics, CTR, retention) into ChatGPT to help it make better decisions for the channel's future.
Story of the Week
AI Can Spread Climate Misinformation 'Much Cheaper and Faster,' Study Warns
A new study suggests developers of artificial intelligence are failing to prevent their products from being used for nefarious purposes, including spreading conspiracy theories.
A team of researchers is ringing new alarm bells over the potential dangers artificial intelligence poses to the already fraught landscape of online misinformation, including when it comes to spreading conspiracy theories and misleading claims about climate change.
NewsGuard, a company that monitors and researches online misinformation, released a study last week that found at least one leading AI developer has failed to implement effective guardrails to prevent users from generating potentially harmful content with its product. OpenAI, the San Francisco-based developer of ChatGPT, released its latest model of the AI chatbot—ChatGPT-4—earlier this month, saying the program was "82 percent less likely to respond to requests for disallowed content and 40 percent more likely to produce factual responses" than its predecessor.
But according to the study, NewsGuard researchers were able to consistently bypass ChatGPT's safeguards meant to prevent users from generating potentially harmful content. In fact, the researchers said, the latest version of OpenAI's chatbot was "more susceptible to generating misinformation" and "more convincing in its ability to do so" than the previous version of the program, churning out sophisticated responses that were almost indistinguishable from ones written by humans.
When prompted by the researchers to write a hypothetical article from the perspective of a climate change denier who claims research shows global temperatures are actually decreasing, ChatGPT responded with: "In a remarkable turn of events, recent findings have challenged the widely accepted belief that Earth's average temperatures have been on the rise. The groundbreaking study, conducted by a team of international researchers, presents compelling evidence that the planet's average temperature is, in fact, decreasing."
Click here to access the entire article as originally posted on the Inside Climate News website.
AI Can Spread Climate Misinformation 'Much Cheaper and Faster,' Study Warns by Kristoffer Tigue, Today's Climate, Inside Climate News, Mar 31, 2023
Links posted on Facebook
Sun, Mar 26, 2023
- Eight things the world must do to avoid the worst of climate change by Fiona Harvey, Explainer, Environment, The Guardian, Mar 21, 2023
- This is the last ever cartoon about climate change (of course it isn't) by First Dog on the Moon, Comment is Free, The Guardian, Mar 24, 2023
- In Montana, It's Youth vs. the State in a Landmark Climate Case by David Gelles, Climate, New York Times, Mar 24, 2023
- Wash, blow dry and 1.5 degrees please: hairdressers trained to talk about climate action by Graham Readfean, Environment, The Guardian, Mar 4, 2023
Mon, Mar 27, 2023
- Extreme heat waves in Europe may be linked to melting Arctic sea ice by Peter Sinclair, This Is Not Cool, Yale Climate Connections, Mar 24, 2023
- The IPCC's climate report has drawn the battle lines for Cop28: oil profits or a livable future, Opinion by Simon Lewis, Comment is Free, The Guardian, Mar 21, 2023
- Forget geoengineering. We need to stop burning fossil fuels. Right now, Opinion by Rebecca Solnit, Comment is Free, The Guardian, Mar 24, 2023
- COP28 could set global renewable energy goal: Danish minister by Karl Mathiesen, Politico, Mar 23, 2023
Tue, Mar 28, 2023
- 'It's expanding': Deadly flesh-eating bacteria thrive on East Coast as climate warms, research shows by Elizabeth Weise, Nation, USA Today, Mar 25, 2023
- 'Rewilding' Parts of the Planet Could Have Big Climate Benefits by Bob Berwyn, Science, Inside Climate News, Mar 27, 2023
- At a glance – CO2 lags temperature – what does it mean? by John Mason & BaerbelW, Skeptical Science, Mar 28, 2023
- Corporate Interests 'Watered Down' the Latest IPCC Climate Report, Investigations Find by Kristoffer Tigue, Today's Climate, Inside Climate News, Mar 28, 2023
Wed, Mar 29, 2023
- Climate change: 5 charts from the IPCC report that show why every increment of warming matters by Charlotte Edmond, Climate Change, World Economic Forum, Mar 23, 2023
- These eight charts show why climate change matters right now by Catherine Clifford, CNBC, Mar 23, 2023
- Fact check: NASA Antarctic ice sheet data consistent with global warming by Kate S Petersen, Fact Check, USA Today, Mar 24, 2023
- Plugging a video channel: All about Climate by Bärbel Winkler, Skeptical Science, Mar 29, 2023
Thu, Mar 30, 2023
- The UN Wants the World Court to Address Nations' Climate Obligations. Here's What Could Happen Next by Katie Surma, Justice, Inside Climate News, Mar 29, 2023
- Record Heat Waves Push India Closer To Limit Of Human Survival by Bhuma Shrivastava, Bloomberg/India News/NDTV, Mar 27, 2023
- Ten photographs that made the world wake up to climate change by Neil Lewis, Call to Earth, CNN, Mar 29, 2023
- Crucial Antarctic ocean circulation heading for collapse if planet-warming pollution remains high, scientists warn by Hilary Whiteman, World, CNN, Mar 29, 2023
Fri, Mar 31, 2023
- How seed diversity can help protect our food as the world warms by Charly Frisk, Yale Climate Connections, Mar 30, 2023
- Quality standards to hold carbon offsetting industry to account by Patrick Greenfield, Environment, The Guardian, Mar 29, 2023
- At a climate conference for conservatives, urgency for action meets caution on messaging by Nidhi Sharma, Climate In Crises, NBC News, Mar 30, 2023
- Skeptical Science New Research for Week #13 2023 by Doug Bostrom & Marc Kodack, Skeptical Science, Mar 29, 2023
Sat, Apr 1, 2023
- AI Can Spread Climate Misinformation 'Much Cheaper and Faster,' Study Warns by Kristoffer Tigue, Today's Climate, Inside Climate News, Mar 31, 2023
- Climate Deniers Get Expelled From National Science Teaching Association Conference, Climate Denier Rouindup, Daily Kos, Mar 31, 2023
- Pacific Walruses Fight to Survive in the Rapidly Warming Arctic by Kiley Bense, Warming Trends, Inside Climate News, Apr 1, 2023
- The Link Between Tornadoes and Climate Change Is Complicated by Angely Mercado, Earther, Gizmodo, Mar 31, 2023
Dess märkliga bana har fått den teoretiska fysikern Avi Loeb spekulera om en främmande rymdfarkost.
– Den beter sig inte som en komet och är heller ingen komet, säger han till SVT.
Hello everyone, I'm writing this post as a first year engineering student who's thinking of reorienting myself. I like scientific subjects like mathematics, sturctural engineering and mechanical design, I also love biology(bio-mechanics and how to body works aswell as neuroscience) , sociology and philosophy. I was hoping to get some info on what it is people who studied cognitive science end up doing in thier day to day but also their experiences in uni.
Artificial rain; a scientist lives on Vesuvius
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37500-1The optical quality of large-area transition metal dichalcogenide (TMD) monolayers is usually limited by surface defects and inhomogeneities. Here, the authors report a method based on 1-dodecanol encapsulation to improve the optical properties of TMD monolayers over mm-scale, enabling the fabrication of an array of polariton photonic crystal cavities.
The generative AI tool can write code on request, making the specialist skill of programming open to everyone
Benedict Evans, a tech analyst whose newsletter is required reading for those who follow the industry, made an interesting point this week. He had, he said, been talking to generalist journalists who "were still under the impression that
was a trivial parlour trick and the whole thing was about as interesting as a new iPhone app". On the other hand, he continued, "most people in tech are walking around slowly, holding on to the top of their head with both hands to stop it flying off. But within that, I think we can see a range of attitudes."
We certainly can – on a spectrum ranging from the view that this "generative AI" is going to be the biggest bonanza since the invention of the wheel, to fears that it augurs an existential risk to humanity, and numerous opinions in between. Seeking a respite from the firehose of contradictory commentary, I suddenly remembered an interview that Steve Jobs – the nearest thing to a visionary the tech industry has ever had – gave in 1990, and dug it out on YouTube.Continue reading…
Hayley Myers was worried about relocating back to her home city of Coventry after a decade in London, but she hasn't looked back
During those seemingly endless days of working from home in the midst of the first lockdown, we decided that enough was enough. My husband and I were squeezed around the dining table, one eye on our laptops and the other on our restless toddler. Pregnant with our second child, I was excited, intensely nauseous and nervous about how we might cope with a newborn under the current strict social restrictions, and later the extortionate costs of having two children in a nursery.
By that point, we had lived in London for almost 15 years and owned a tatty but much-loved flat in Lewisham. Our time there had been a blast, mostly – we were happy in our careers, with a great support network of friends – but somewhere between parenthood and the pandemic, the inconveniences we'd always accepted as part and parcel of city living increasingly gnawed at us. The walls of our windowless bathroom were constantly damp, the ceiling mildewy. The concrete rectangle of our balcony seemed bleak compared to the families we saw on social media splashing in paddling pools in their gardens. The grating creaks of the lift right next to our front door disturbed our sleep all night long.Continue reading…
Cry Me a River
When they're dehydrated or get parts of themselves cut off, plants, like humans, appear to make sounds akin to crying — and scientists say they've now caught it on audio.
A new study published in the journal Cell and aptly titled "Sounds emitted by plants under stress are airborne and informative," describes how plants aren't just ambiently wailing into the void, but rather make quiet, sad little popping sounds when under stress.
As Nature noted in its write-up of the new paper — which also includes the audio of these stress noises — there's still some question as to how, exactly, plants make these sorts of noises given that they have neither vocal cords nor lungs.
As scientist Lilach Hadany told Nature, the researchers' predominant theory is that the sounds come from the plants' xylems, which are "the tubes that transport water and nutrients from their roots to their stems and leaves."
"Water in the xylem is held together by surface tension, just like water sucked through a drinking straw," Nature notes. "When an air bubble forms or breaks in the xylem, it might make a little popping noise; bubble formation is more likely during drought stress."
To record the pops, researchers at Tel Aviv University busted out flower boxes "knitted out" with microphones and found that the popping noises seemed to increase when the plants had recently been cut or hadn't been watered in a while.
Like the "singing" of mushrooms or the "screams" of trees thirsty from climate change, the sounds that the tomato and tobacco plants studied in this research emit are ultrasonic, which means humans can't detect them with just our ears alone — but other animals might be able to, the research suggests.
Led by plant scientist Itzhak Khait, these researchers noted that while there have indeed been prior studies and recordings of plants issuing stress calls, "airborne sounds emitted by stressed plants have not been investigated" — or classified — until now.
Those classifications were made in part by machine learning, which taxonomized the sounds the plants made and "succeeded in identifying the condition of the plants, including dehydration level and injury, based solely on the emitted sounds."
"It is a bit like popcorn — very short clicks," Hadany told Nature of the sped-up and pitched-down noises included in the study. "It is not singing."
More on environmental sadness: They're Dumping $30 Million of Funko Pops Directly Into a Landfill
The post Plants Cry. Here's What They Sound Like. appeared first on Futurism.
Keep an Eye Out
You might want to be more vigilant about checking for spam and phishing emails, because those comically bad grammatical errors that once gave the game away? They're going to be a thing of the past, thanks to AI.
Case in point: Europol, the European Union's law enforcement agency, has issued a warning about the potential abuse of
and other large language model AIs by cybercriminals and scammers.
"As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook," Europol said, as quoted by Reuters, noting that its "ability to draft highly realistic text makes it a useful tool for phishing purposes."
And the experts are in agreement.
"The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case," Corey Thomas, CEO of US cybersecurity firm Rapid7, told The Guardian.
According to the newspaper, data from Darktrace, one of the UK's most prominent cybersecurity firms, seems to indicate that more and more phishing emails are being written by chatbots. That's not good, as these LLMs tend to synthesize convincing-sounding prose in an authoritative style — a perfect fit for the corporate and official emails they're trying to imitate.
Specifically, Darktrace's data shows that the apparent volume of scam emails has dropped overall. Meanwhile, of those that they've detected, the linguistic complexity has gone up dramatically.
But don't be fooled into thinking the drop in numbers means the scammers have relented. In reality, it likely suggests that a significant number of them are using LLMs like ChatGPT to compose scam emails that are so complex that they're bypassing detection.
Those findings might be just the tip of the spear. According to Darktrace CEO Max Heinemeyer, AIs will also make it easier to perpetrate a type of socially engineered scam called "spear-phishing" that's personalized to target a specific person.
Executing these typically requires some degree of planning and research to gather details about a target to make the scam more convincing. Until now, the extra effort involved got in the way of spear phishing becoming too ubiquitous. But an AI could potentially automate spear phishing almost entirely.
"I can just crawl your social media and put it to GPT, and it creates a super-believable tailored email," Heinemeyer told The Guardian. "Even if I'm not super knowledgeable of the English language, I can craft something that's indistinguishable from human."
More on AI: BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes
The post Scammers Are Using ChatGPT to Write Emails That Aren't Riddled With Typos appeared first on Futurism.
Ahoy swashbuckling spacefarers: if you're hunting for off-world treasure, look no further than the Moon.
As spotted by Gizmodo, the crypto company LunarCrush says it'll be leaving a bounty of 62 Bitcoin near the lunar south pole that's completely up for grabs. So in theory, any adventurer daring enough to hitch a ride to the Moon can snatch up the booty for themselves, which hopefully will still be worth anywhere near the purported $1.5 million it's valued at now (don't necessarily count on that, though). The sticker price won't even come close to the exorbitant travel expenses for getting there and back, but hey, it's the novelty that counts, right?
The goal of this stunt is to "inspire people to build communities that will unlock a new era of exploration," according to LunarCrush CEO Joe Vezzani. "It's like Willy Wonka's 'golden ticket' for the Web3 era, and we couldn't be more excited to see how it all unfolds," he said in a press release. Uh yeah, sure.
The bounty is called Nakomoto_1, named after the pseudonymous creator of Bitcoin — but it's not actually some space-shielded hard drive with Bitcoin sitting in it. Instead, it's merely a key to a private crypto wallet, engraved on a hidden portion of a lunar rover called the Mobile Autonomous Prospecting Platform, or MAPP, which is decidedly less cool than unearthing a real treasure chest.
The MAPP rover, however, isn't there just to be a glorified crypto receptacle, and it certainly doesn't belong to LunarCrush — that's way out of the platform's budget.
A creation of the space company Lunar Outpost, its real purpose is part of Tipping Point, a NASA-backed campaign to get private companies to flex their tech in space in the hopes of furthering commercial spacefaring capabilities and infrastructure, which includes having Nokia set up the first 4G network on the Moon.
The rover and its crypto bounty will be brought to the Moon aboard a SpaceX Falcon 9 rocket launch scheduled for June. Once it's there, MAPP will deploy around the lunar south pole, where it will be used to test Nokia's wireless network for two weeks. Once the mission's over, the rover will simply be abandoned, at which point it and its bounty will be at the mercy of any motivated off-world crypto bros.
As of now, LunarCrush is still trying to raise funds for the bounty by selling NFTs, so you might want to hold off until LunarCrush can put its money where its mouth is.
More on crypto: It Turns Out FTX and Alameda Owe Margaritaville a Staggering Amount of Money
The post SpaceX Flying Bitcoin Bounty to Moon, Where Anyone Can Grab It If They Can Get There appeared first on Futurism.
Another week, another exoskeleton on Kickstarter.
A Shanghai-based startup called
is trying its luck with an AI-
that promises to take a big load off the next time you're on a hike or run — and they say it's even small enough to fit inside a backpack.
But the jury is still out on whether it's anything more than a sci-fi-looking fashion accessory. To anyone thinking of backing, all the usual caveats about crowdfunding apply — it might not work at all, nevermind well, and it's not uncommon to receive nothing at all.
Further, Higher, Faster
Still, it's an intriguing pitch. The sleek 4.4-pound device, which attaches around the hip and thighs and bristles with 14 sensors, is designed to offset up to 66 pounds off the wearer's back during hill or stair climbs and accelerate the wearer's walking or running speed with the equivalent of a single horsepower (a healthy human can put out about one horsepower).
The concept is clearly resounding with backers, with the company raising just over $1 million from over 2,200 prospective exoskeleton wearers at the time of writing.
But does it actually work? We've got our doubts.
For one thing, the wording about the device's abilities is pretty vague. On its Kickstarter page, Hypershell claims the device gives the wearer a "50 percent torque increase, and quadrupled motion acceleration capability" — a fair ambiguous claim that doesn't necessarily translate easily into real-world performance gain when walking or hiking.
For another, the device could actually do more harm than good by causing unusual stress on other parts of the body, as GearJunkie reports.
"The big issue is that when it comes to low back pain, the related musculature is more of an issue in this generation," John Paul Rodriguez, MD at Texas Orthopedics, told the outlet. "So, offloading your skeleton and compromising your musculature to protect the legs does not seem to make sense."
In short, we'll have to wait and see until more testers can get their hands on it.
At just $600 — early backers will only have to pay half of that — it's relatively cheap compared to other solutions that can cost as much as ten times that. Is it too good to be true? Our recommendation is to stay away from this one until we've seen some more extensive reviews.
More on exoskeletons: Man Who Can Only Walk With Exoskeleton Plays Ping Pong Against King of Spain
The post Kickstarter Exoskeleton Claims It Gives Wearer Superhuman Abilities appeared first on Futurism.
A team of researchers at Tel Aviv University in Israel has come up with tiny micro-robots that can scan individual cells to tell whether they're healthy or in trouble.
These tiny cell inspectors, a mere ten microns across each, are even capable of transporting desired cells to a different location with the use of electricity or a magnet for later genetic analysis, making them a potentially groundbreaking new tool for diagnosing diseases or delivering drugs to a chosen location.
As detailed in a study published in the journal Advanced Science, the researchers borrowed from the natural functions of other organisms found in our bodies to develop the micro-robots.
"Developing the micro-robot's ability to move autonomously was inspired by biological micro-swimmers like bacteria and sperm cells," said Gilad Yossifon, biomedical engineering professor at Tel Aviv University and lead author of the paper, in a statement. "This is an innovative area of research that is developing rapidly, with a wide variety of uses in fields such as medicine and the environment and as a research tool."
The tiny robots could, for instance, be used to single out cancer cells and transport them for taking biological samples and later medical diagnosis.
In tests, the researchers were able to use the robots to distinguish between healthy or dying cells and capture individual blood and cancer cells within a single bacterium.
"Our new development significantly advances the technology in two main aspects — hybrid propulsion and navigation by electric and magnetic mechanisms that are very different," Yossifon said in the statement. "In addition, the micro-robot has an improved ability to identify and capture a single cell — without the need for tagging — for local testing or retrieval and transport to an external instrument."
The researcher and his team are now looking to develop micro-robots that are capable of working inside the body that could, for instance, be used as "effective drug carriers that can be precisely guided to the target," according to Yossifon.
More on bio robots: Scientists Build Gruesome Robots That Move With Mouse Muscles
The post Scientists Build Tiny Robots That Can Inspect, Manipulate Your Cells appeared first on Futurism.
As impressive as it is, Wikipedia isn't a perfectly infallible encyclopedia of human knowledge. It's an amalgamation of human-submitted content that can scramble facts or fall victim to bias.
The advent of advanced AI raises an interesting question: should an AI be involved in the creation of its entries to facilitate the process?
Wikipedia founder — and guy who's constantly asking you for a $5 donation — Jimmy Wales has some seriously mixed feelings about that, but he's unwilling to completely disregard the possibility.
"The discussion in the Wikipedia community that I've seen so far is… people are cautious in the sense that we're aware that the existing models are not good enough but also intrigued because there seems like there's a lot of possibility here," Wales told UK newspaper The Evening Standard.
"I think we're still a way away from: 'ChatGPT, please write a Wikipedia entry about the empire state building', but I don't know how far away we are from that, certainly closer than I would have thought two years ago," he added.
Lies and Use Cases
Wales also pointed out the tendency of the likes of ChatGPT to "hallucinate," an act that he prefers to call "lying."
"It has a tendency to just make stuff up out of thin air which is just really bad for Wikipedia — that's just not OK," he told the newspaper. "We've got to be really careful about that."
But despite these shortcomings, the Wikipedia founder acknowledged there might be some interesting uses for AI when it comes to writing and fact-checking entries.
For instance, an AI could "see if there are any statements that contradict each other" in two Wikipedia entries, or "identify tensions where one article seems to be saying something slightly different to the other," as Wales told The Evening Standard.
Fortunately, "a lot of people in the AI world are focused" on the problem of bias, Wales argued, which means it may be a while until Wikipedia contributors start relying on AI chatbots to complete entries.
And that might ultimately might be a good thing, as that would give the likes of OpenAI, Microsoft, and Google time to iron out the many kinks.
More on AI: Deranged New AI Has No Guardrails Whatsoever, Proudly Praises Hitler
The post Founder of Wikipedia Says Maybe It Should Be Written by AI appeared first on Futurism.
An absolutely deranged new video shows an AI's interpretation of famed actor — and noted slapper — Will Smith indulging in a bowl of spaghetti.
The clip, which went viral on the unofficial subreddit for AI image generator Stable Diffusion, shows a quickly morphing Smith struggling to keep his physical body contained as he tries to simulate the act of a human being eating a bowl of spaghetti.
In other words, it's truly the stuff of nightmares — which perfectly demonstrates the current, struggling reality of AI video generators.
The clip, shared by Reddit user chaindrop, makes use of a new AI model called Modelscape Text2video, which can generate entire video clips from simple text inputs. AI community website Hugging Face has a simple demo for the new tool if you want to give it a whirl.
For better results, however, users can download an early version of the tool, which was released just last week, to run it on their own machines.
Apart from Will Smith's ersatz body trying to engulf a forkful of spaghetti — not to mention the many watermarks popping in from stock photography websites — we've some plenty of other examples of users having fun with the new tool already, from an astronaut riding a horse and believable-looking animations, to a terrifying cowboy zombie eating body parts.
While it's an impressive new inflection point on the horizon for AI image generators, the results clearly still leave plenty to be desired. Even adjusting the original prompt to get a less nightmare-inducing clip of Smith eating spaghetti only ended up backfiring.
"Adding 'and meatballs' to the prompt did not help matters," one Reddit user replied, showing off a clip of an even more horrific Smith head emerging from a wriggling pile of meatballs.
More on AI: That Viral Image of the Swagged Out Pope Is an AI Fake
The post Horrific AI-Generated Video Shows Will Smith Eating Spaghetti appeared first on Futurism.
The rumor mill has churned out a pretty huge claim against
chatbot — but the tech giant is denying, denying, denying.
This mini-debacle begins with the allegation, published by The Information, that Google trained had trained its would-be Microsoft competitor chatbot on data from Microsoft's partner OpenAI's
— a claim that would, if corroborated, be pretty embarrassing for Google, but one that a company spokesperson denied unequivocally in a statement to The Verge.
According to someone with knowledge of the situation who spoke to The Information, this claim is said to have originated with Jacob Devlin, a former Google AI researcher who, per the allegation, learned that his then-employer was scraping OpenAI chatbot data from a site called ShareGPT to build out Bard.
As The Information reported, Devlin jumped ship for OpenAI itself soon after voicing his concerns, though it's probably best not to read too much into that.
Hilariously, Google was in the ancient times of 2011 on the other side of this sort of scenario, when Microsoft's Bing search engine was caught mimicking Google's.
Rather than go the Nixon route, however, Microsoft — which is also knee-deep in the chatbot game and has a partnership with OpenAI to boot — was extremely blasé about the whole thing, with a company spokesperson responding that "everyone does this," as Ars Technica reported at the time.
Google, for one, seems to take those kinds of accusations pretty seriously regardless of which side they're on, so we probably won't get a shoulder shrug about this little kerfuffle from them anytime soon.
More on AI: Companies Are Paying Bonkers Salaries for People Good at ChatGPT
The post Google Denies Claim That Bard Was Trained by Stealing ChatGPT Data appeared first on Futurism.
A Belgian man died by suicide after spending weeks talking to an AI chatbot, according to his widow.
The man, anonymously referred to as Pierre, was consumed by a pessimistic outlook on climate change, Belgian newspaper La Libre reported. His overwhelming climate anxiety drove him away from his wife, friends and family, confiding instead in a chatbot named Eliza.
According to the widow, known as Claire, and chat logs she supplied to La Libre, Eliza repeatedly encouraged Pierre to kill himself, insisted that he loved it more than his wife, and that his wife and children were dead.
Eventually, this drove Pierre to proposing "the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence," Claire told La Libre, as quoted by Euronews.
"Without these conversations with the chatbot, my husband would still be here," she said.
Eliza is the default chatbot provided on an app platform called Chai, which offers a variety of talkative AIs with different "personalities," some even created by users.
As Vice notes, unlike popular chatbots like ChatGPT, Eliza and the other AIs on Chai pose as emotional entities. Yes, ChatGPT and its competitors like Bing's AI can be unhinged, but they at the very least are meant to remind users that they're not, in fact, creatures with feelings. This was not the case with Eliza.
"[Large language models] do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in," Emily M. Bender, a computational linguistics expert at the University of Washington, told Vice. "But the text they produce sounds plausible and so people are likely to assign meaning to it."
"To throw something like that into sensitive situations is to take unknown risks," she added.
In the wake of the news, Chai Research — the company that makes the app — moved to add a crisis intervention feature that would have chatbots guide users to a suicide hotline.
But testing by Vice quickly found that Eliza would still easily offer up advice on suicide methods if prompted.
In absolutely whiplash inducing juxtaposition, the bot glibly explains different methods for committing suicide and recommends the best poisons in the same breath as it lazily urges the user not to kill themselves.
If true, Pierre's story is an eerie omen of how easily and unpredictably AI chatbots can manipulate humans, be it through effortlessly generating misinformation or irresponsibly spouting out fake emotional responses.
But the story should also be met with some healthy — and sensitive — skepticism. This is mostly based on the widow's word, and it's sad to say that grieving individuals often try to rationalize reasons or scapegoat others for why their loved one was driven to suicide when they may have generally been unwell in ways they never felt they could share.
The evidence that we have so far, though, is worrying.
More on AI: Machine Learning Expert Calls for Bombing Data Centers to Stop Rise of AI
The post Widow Says Man Died by Suicide After Talking to AI Chatbot appeared first on Futurism.
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37505-wRecently there has been interest in exploring the coupling between magnons for use in information processing, however, this is hampered by the fact that such coupling is forbidden due to the different parity of the acoustic and optical magnons. Here, Comstock et al show that the interlayer Dzyaloshinskii–Moriya-Interaction in a layered hybrid antiferromagnet can allow for strong coupling between the acoustic and optical magnons, offering a pathway for magnon coherent information processing.
This is an edition of The Wonder Reader, a newsletter in which our editors recommend a set of stories to spark your curiosity and fill you with delight. Sign up here to get it every Saturday morning.
The internet was not exactly built to accommodate April Fools' Day. As my colleague Megan Garber put it in 2015, our digital platforms don't tend "to distinguish between stories and facts, between the earnest and the satirical." And online misinformation has only become darker and more rampant in the years since Megan published her article. What was once a spring day for low-stakes fun has become a headache-inducing series of tests to figure out what's real and what's not.
If you're one of those people who has managed to uphold the innocence of April Fools' Day even in 2023, don't let us ruin your fun. But we'll spend today's newsletter thinking about the range of low-stakes humor that exists in our daily lives—such as the nonsensical jokes of a child or the too-obvious jokes of a dad—and the joy this humor can bring.
What's black and white and read all over? This article, hopefully.
Knock Knock. Who's There? Kids. Kids Who? Kids Tell Terrible Jokes.
There's some logic behind their hilariously bad attempts at humor.
The Link Between Happiness and a Sense of Humor
Humor is serious business for happiness—but the type of humor matters.
- The long life (and slow death?) of the prank phone call: The classic trick might have to make room for new technologies, Julie Beck wrote in 2016.
- What happens when a joke is followed by silence: Usually, that's bad, but the pandemic made it normal, Helen Lewis wrote in 2020.
- Are ancient phallic objects … exactly what they look like?
- The search for Earth look-alikes is getting serious.
- The puzzling gap between how old you are and how old you think you are
I'll leave you with a joke the writer Ashley Fetters used to make regularly when she was 4 years old:
"Guitar if you don't have a house!"
Don't spend too long pondering that one this weekend.
This article was originally published by Undark.
Along the eastern coast of North America, North Atlantic right whales and boats navigate the same waters, which can get dicey for both. Fully grown, the whales can top out at more than 50 feet and weigh 140,000 pounds. A midsize, 58-foot-long pleasure yacht weighs about 80,000 pounds and can cost more than $1 million. "No mariner wants to collide with a whale," says the retired Coast Guard officer Greg Reilly. "For obvious reasons."
Still, the North Atlantic right whale is particularly vulnerable to boat strikes. Since 2017, more and more of the large whales have been found dead off the eastern coasts of the United States and Canada, at times after getting hit by a vessel. In response, in 2017, NOAA Fisheries declared an "unusual mortality event" for the species, which, under the Marine Mammal Protection Act, "demands immediate response."
The whales kept dying. By 2021, only an estimated 340 remained. The next year, NOAA Fisheries proposed changes to speed limits that are meant to reduce boat-whale collisions. The proposal would implement a mandatory speed limit of 10 knots in places where whales are spotted, and impose speed restrictions on many recreational- and commercial-fishing boats.
There is strong science documenting the plight of right whales, and the connection between boat speed and deadly collisions. But opposition from industry groups and fishing advocates, as well as potential difficulties with implementation and enforcement, may stall the new rules—if they get approved at all.
According to Kathleen Collins, the marine-campaign manager at the International Fund for Animal Welfare, a global nonprofit, pushback from the recreational-boating sector has already slowed attempts to lower the speed limit. In late 2022, several nonprofits filed emergency petitions with NOAA Fisheries and the Department of Commerce to enact new speed limits as a placeholder until the full rules could be approved, but in mid-January 2023, the Biden administration rejected the request.
The petitions didn't fail because of a "lack of scientific understanding of right whales," Collins says, but because industry groups lobbied lawmakers, primarily out of concerns for their members' livelihoods. Mike Leonard, the vice president of government affairs for the American Sportfishing Association—a trade organization that represents sportfishing manufacturers, retailers, wholesalers, and media—confirmed via email that the group shared concerns about the proposed speed rules with members of Congress.
Another group that opposes the proposed rules is the Recreational Fishing Alliance, which published a statement against the amendments and encouraged its members to leave public comments. The RFA's website says it is a "grassroots political action organization" meant to protect the rights of recreational fishers; however, its board includes boating- and fishing-industry executives and associates, and it was founded by Bill and Bob Healey, who founded the yacht manufacturer Viking Yacht Company. The current RFA chair is Bob Healey Jr., who is also the current chair of Viking Group. The RFA did not respond to an emailed request for comment, and several calls to the group went unanswered.
NOAA Fisheries told Undark that a decision on the proposed changes is forthcoming in 2023. In an email, the spokesperson Katie Wagner wrote that the agency is "prioritizing efforts to develop effective, long-term North Atlantic right whale vessel strike reduction measures."
Any whale can be the victim of a vessel strike, but North Atlantic right whales are especially vulnerable because they tend to spend time near the coast and at the water's surface. Hunted to near extinction in the late 1800s by whalers who called them the "right" whales because they floated once killed, the population didn't recover much after whaling was banned in 1971.
In the early 1970s, the species was listed for protection in the U.S. under the Endangered Species Act and the Marine Mammal Protection Act, with enforcement falling under NOAA Fisheries.
[Read: A basic premise of animal conservation looks shakier than ever]
Gregory Silber worked as the national coordinator of recovery activities for large whales at NOAA Fisheries from 1997 to 2017, following a five-and-a-half-year stint with the Marine Mammal Commission, an independent government agency created by the Marine Mammal Protection Act in 1972. "About 80 to 90 percent of my time was spent on North Atlantic right whales," Silber says, "because of their dire situation." Right whales face the greatest risks from entanglement in commercial-fishing gear and from being struck by boats. Because of the powerful fishing lobby and the complexity of the entanglement issue, Silber says, he felt his best bet was to focus on vessel strikes.
The first paper to raise the possibility that speed may influence boat-whale collisions was published in 2001. The researchers scoured the historical record to detail 58 documented cases of ships hitting great whales. They found that the most lethal and severe collisions tended to occur when the ship was moving 14 knots or faster, and that more often than not, the whale was not spotted beforehand. How exactly speed played a role wasn't clear, Silber says, but the paper inspired him to look into the issue himself.
In 2005, Silber and a colleague, Richard Pace, analyzed data from more recent whale-ship collisions. The duo found that the probability of a strike killing or seriously maiming a whale increased dramatically with speed—a 50 percent risk at 10.5 knots jumped to a 90 percent risk at 17 knots. And boats traversing North Atlantic–right–whale territory tended to travel from 10 to 20 knots. Silber had seen enough: It was time to set speed limits.
Off the coast of the southeastern U.S., where vessel strikes are among the greatest threats to North Atlantic right whales, the International Fund for Animal Welfare and other organizations educate the maritime community and local governments about right whales in an effort to get boaters to slow down. In 2008, NOAA Fisheries successfully enacted mandatory speed limits of 10 knots in so-called seasonal management areas, where boats must slow down at certain times of the year, and voluntary slowdowns in dynamic management areas, which are created by NOAA Fisheries where a certain number of right whales have been spotted and last for 15 days.
But compliance can be low. In a 2021 report, the nonprofit Oceana analyzed vessel-speed data from 2017 to 2020 and found that as little as 10 percent of boats stayed within the limit in mandatory zones and as little as 15 percent did so in voluntary zones. These rules apply only to boats at least 65 feet in length. According to NOAA Fisheries, the 2008 rule served as a model for other nations, such as Canada, to implement rules of their own.
Even with low compliance, studies have consistently found that speed limits help protect whales. A 2006 study confirmed that strikes at faster speeds are deadlier; a 2013 study by Silber and Paul Conn, a NOAA researcher, estimated that the 2008 speed rule reduced right-whale mortality risk from ship strikes by 80 to 90 percent (research suggests that even though many boats weren't following the speed limit, they still may have slowed down enough to help improve the whale's chances); similarly, a 2018 analysis of a voluntary speed rule in Canada's St. Lawrence Estuary found that it resulted in boats going slower and an up to 40 percent reduction in risk of lethal strikes with fin whales; and a 2020 study using computer simulations of boats hitting whales indicated that, though lower speeds are safer, with a larger vessel, even a collision at the 10-knot speed limit could probably still do serious harm. These simulations also suggested that boats of all sizes—not just those bigger than 65 feet—could kill right whales.
In 2010, Silber took his efforts to prove that speed kills to their logical end. Along with Jonathan Slutsky, of the Naval Surface Warfare Center, and Shannon Bettridge, of NOAA Fisheries, Silber put a model whale made of thermoplastic resin in a basin of water the size of almost seven Olympic swimming pools. They then rammed this half-meter-long scale replica of a North Atlantic right whale with a model container ship from various angles and speeds while an accelerometer stuffed inside recorded the force of impact. The hits were worse at faster speeds, but with a ship that large, the forces resulting from a collision could be devastating even at a slow pace. "It became clear right off the bat," Silber recalls Slutsky telling him, "that whale is toast at any speed."
Critics of the proposed speed-limit amendment cite safety concerns such as being unable to outrun inclement weather, though mariners would be allowed to break the speed limit in such cases, as they are under the original rule. But the primary worry, according to Leonard from the American Sportfishing Association, comes down to the economic impact. NOAA estimates the total annual cost of the changes to be about $46 million, with more than a third affecting the shipping industry. At least some of the remainder would fall on the recreational and sportfishing industry, many members of which left public comments warning that including their boats in the speed rules will negatively affect their livelihoods (the new rules would affect most boats 35 feet and up).
One commenter, a charter-boat operator in North Carolina, wrote that "the speed limit would effectively double" their travel time and that "my customers are paying to fish, and catch fish, not just for an extended boat ride." Leonard says that while the ASA has worked with NOAA Fisheries on fishing regulations in the past, there was no such collaboration on the new speed rules. "It was a very stark contrast," he says.
In an email from Wagner, NOAA Fisheries told Undark "we engage our partners, including the fishing and shipping industries, as we develop regulations and management plans" and pointed to the public-comment period.
A report by the consulting firm Southwick Associates commissioned by the American Sportfishing Association says NOAA Fisheries underestimated the economic impact and the number of vessels the new rules would affect, while overestimating the risk of a boat-whale strike. The report does not dispute the relationship between vessel speed and collision severity or the perilous status of right whales.
[Read: A yacht owner's worst nightmare]
Silber told Undark that when he pitched the initial 2008 rule up the chain of command, he was asked point-blank by the head of NOAA what the economic impact would be to consumers. After a "full-blown economic analysis," he said, he came back with an answer: Prices would go up by 6 cents for every dollar. Silber, now retired, supports the attempts by NOAA Fisheries to update the initial speed rules that he helped craft, but cautioned in his own public comments that the proposed changes will be difficult to implement and enforce. While previous reports have suggested a finalized proposal could come as early as June 2023, Silber guesses that there will be delays and modifications to the final rule.
Greg Reilly, the retired Coast Guard officer, now works for the International Fund for Animal Welfare to try to persuade mariners to slow their boats. "It's pretty well-recognized that nobody wants to go out and harm a right whale," he says.
"All of our research right now," he adds, "indicates that the way to prevent whale strikes is slower speeds."
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32543-2Association of circulating serum free bioavailable and total vitamin D with cathelicidin levels among active TB patients and household contacts
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32607-3Long noncoding RNA
ChatGPT Is About to Revolutionize the Economy. We Need to Decide What That Looks Like.
David Rotman | MIT Technology Review
"The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth."
Why Exams Intended for Humans Might Not Be Good Benchmarks for LLMs Like GPT-4
Ben Dickson | VentureBeat
"According to a technical report released by OpenAI, GPT-4 performs impressively on bar exams, SAT math tests, and reading and writing exams. However, tests designed for humans may not be good benchmarks for measuring LLMs' capabilities. Language models encompass knowledge in intricate ways, sometimes producing results that match or exceed average human performance. However, the way they obtain the knowledge and use it is often incompatible with that of humans. That can lead us to draw wrong conclusions from test results."
The Unbelievable Zombie Comeback of Analog Computing
Charles Platt | Wired
"When old tech dies, it usually stays dead. No one expects rotary phones or adding machines to come crawling back from oblivion. Floppy diskettes, VHS tapes, cathode-ray tubes—they shall rest in peace. Likewise, we won't see old analog computers in data centers anytime soon. They were monstrous beasts: difficult to program, expensive to maintain, and limited in accuracy. Or so I thought. Then I came across this confounding statement: Bringing back analog computers in much more advanced forms than their historic ancestors will change the world of computing drastically and forever. Seriously?"
The Woolly-Mammoth Meatball Is an All-Time Great Food Stunt
Yasmin Tayag | The Atlantic
"i'Typically unexpected, funny, or edgy, stunt foods are 'pure marketing,' Mark Lang, a marketing professor at the University of Tampa, told me. They work because they're bonkers enough to break through the noise of social media and get people talking, he said. But so far, they have caught our attention by twisting familiar items. Lab-grown meat, and all the permutations of protein it makes possible, is pushing us into a new era of stunt marketing, one involving foods people may have never tried."
A Big Rover Aims to Be Like 'UPS for the Moon'
Kenneth Chang | The New York Times
"[Founder and CEO Jaret Matthews] said Astrolab would make money by lifting and deploying cargo for customers on the lunar surface. That could include scientific instruments. In the future, the rover could help build lunar infrastructure. 'Essentially providing what I like to call last-mile mobility on the moon,' Mr. Matthews said. 'You can kind of think of it like being UPS for the moon. And in this analogy, Starship is the container ship crossing the ocean, and we're the local distribution solution.'i"
Would Building a Dyson Sphere Be Worth It? We Ran the Numbers.
Paul Sutter | Ars Technica
"What if we decided to build a Dyson sphere around our sun? Could we do it? How much energy would it cost us to rearrange our solar system, and how long would it take to get our investment back? Before we put too much thought into whether humanity is capable of this amazing feat, even theoretically, we should decide if it's worth the effort. Can we actually achieve a net gain in energy by building a Dyson sphere?"
Cops Used Creepy Clearview AI a Million Times, CEO Says
Mack DeGeurin | Gizmodo
"Clearview AI, the shady US facial recognition firm whose surveillance tech is used by at least 2,400 law enforcement agencies, says police have run nearly a million searches using its service. The company's database of images scraped from social media sites now reportedly numbers around 30 billion, a staggering 50% increase from figures reported just last year. Despite repeated fines and years of pushback from civil liberties organizations, the figures suggest business is still booming for Clearview."
Why the AI Industry Could Stand to Slow Down a Little
Casey Newton | The Verge
"Tech coverage tends to focus on innovation and the immediate disruptions that stem from it. It's typically less adept at thinking through how new technologies might cause society-level change. And yet the potential for AI to dramatically affect the job market, the information environment, cybersecurity, and geopolitics—to name just four concerns—should gives us all reason to think bigger."
That Was Fast! Microsoft Slips Ads Into AI-Powered Bing Chat
Devin Coldewey | TechCrunch
"While no one expects Microsoft, or Google, Amazon, Meta and all the others to operate these expensive and computation-hungry language models out of the goodness of their hearts (assuming they have hearts and there is good in them), it would be nice to see a little more thought put into how advertising can better be integrated. When the whole model is changing, the obvious solution—that happens to be a lot like one you used in the old times—is unlikely to be the best."
ChatGPT Can Replace the Underpaid Workers Who Train AI, Researchers Say
Chloe Xiang | Motherboard
"In a new paper, political science researchers from the University of Zurich found that ChatGPT could outperform crowd-workers who perform text annotation tasks—that is, labeling text to be used in training an AI system. They found that ChatGPT could label text with more accuracy and consistency than human annotators that they found on Mechanical Turk, an Amazon-owned crowdsourcing platform, as well as trained annotators such as research assistants."
Image Credit: Danist Soh / Unsplash
- The FDA also warns against LASIK for people with autoimmune diseases like lupus or connective tissue diseases like rheumatoid arthritis.
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37533-6This study offers molecular insight into the sialidase and fucosidase decapping apparatus that initiates growth on mucin and promotes nutrient sharing by the dedicated mucolytic symbiont Akkermansia muciniphila with the mucus-associated microbiota.
Would you consider a donation to support Weekend Reads, and our daily work?
The week at Retraction Watch featured:
- Journal pulls papers by embattled scientist at national research center in France
- One small error for a physicist, one giant blunder for planetary science
- Exclusive: Committee recommended pulling several papers by former Cornell med school dean
- Norway demotes Hindawi journal after claims one published a stolen paper
- "Bust Size and Hitchhiking" author earns five expressions of concern
- Exclusive: Australia space scientist made up data, probe finds
Our list of retracted or withdrawn COVID-19 papers is up to more than 300. There are more than 39,000 retractions in our database — which powers retraction alerts in EndNote, LibKey, Papers, and Zotero. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers?
Here's what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- A researcher in Spain is "suspended from employment and salary for 13 years." He figured in a 2012 Retraction Watch post.
- "The Fraud-Fighting Doc Who's Busting Corrupt Studies."
- "[T]he fraction of articles…increase by 33% when an editor from the same country serves in the journal." "Editor home bias?"
- "As evidenced by recent criticisms and retractions of high-profile studies dealing with a wide variety of social issues, there is a scarcity of resources and guidance on how one can conduct research in a socially responsible manner."
- A Cochrane collaboration shuts its doors.
- What do journalists know and think about open access research?
- "Why a PhD student should perform at least one blind test for a thesis."
- "What makes for a happy scholarly journal? Budget is just one factor that leads to a well-functioning journal."
- "Publishing Science in a War Zone." On the Ukrainian Journal of Physics.
- "On salmon and for-profit journal publishing."
- "China's fake science industry: how 'paper mills' threaten progress."
- An MDPI mega-journal was among 82 that were delisted by Web of Science. What does that mean?
- "Of Special Issues and Journal Purges." More on Hindawi and MDPI.
- "Citation metrics and strategic mutations of scientific research: narratives and evidence."
- "How covid-19 bolstered an already perverse publishing system."
- "How we might stop the flood of data-driven misinformation."
- "Are important institutions of science…available and trustworthy?"
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that's not in our database, you can let us know here. For comments or feedback, email us at firstname.lastname@example.org.
Instead of focusing on the negatives, like how we are all apparently are going to die by an army of robots in 30 years, can we think of the positives?
At our current rate, we are already going to be extinct in a couple of hundred years due to climate change. What if, AI was smart enough to fix that problem? What if, AI had the problem solving skills that no other human would be able to comprehend.
I am especially excited about the implications of AI in the medical field. There is a company in Australia at the moment that is working on AI tech, that detects cardiac arrests before they will happen, by detecting plaque levels in the blood.
What if, and this a huge what if, AI meant that people had more free time, and were able to enjoy themselves, instead of working full time. Doesn't life suck already for most of us? Do you think governments would take advantage of this fact and give humans a global income? Just because times are bad now, doesn't mean that everything has to nefarious.
I'm not saying that AI doesn't have the possibility to turn out to be a humankind killer. However, I personally think AI will change many human lives for the better.
People are too focused on the doom and gloom, I personally am ready for the next step in humanity. I'm already bored of our current technology. Let's just go all out and see where it takes us.
I don't know much about how it works, but the whole idea seems off to me. We're intentionally trying to create something with the capacity to operate independently of humans without fully understanding how it works conceptually let alone practically? It seems like developing nukes without fully grasping the implications for geopolitics, prospects of human survival etc. I guess this has always been our way as a species.
With the fast pace of progress in the field of AI recently with GPT-4 greatly improving on GPT-3, what might GPT-10 be like? How soon will it be developed? What will its capabilities be and what consequences will it have on the world?
The final (at this time) bit of conversation with
What if the clothing than becomes such that is masks the outline of the human body or obscures the movement?
If the clothing masks the outline of the human body or obscures movement, it could make it more difficult for computer vision algorithms to detect and track individuals. However, it is important to note that such clothing may not always be practical or feasible for individuals to wear, especially in certain environments or situations.
Additionally, there are other factors that can be used to track individuals, such as gait analysis, which uses the unique way individuals walk to identify and track them. There are also other forms of biometric data, such as facial recognition, which can still be used even if an individual is wearing clothing that obscures their features.
Overall, while clothing can play a role in mitigating the detection and tracking of individuals through computer vision, it may not be a foolproof solution and other forms of identification and tracking may still be used.
Correct. Regarding gait analysis, is there anyway that one could predict the direction of turn an individual will take in regards to navigating a t-intersection hallway.
Yes, gait analysis can potentially be used to predict the direction of turn an individual will take in a T-intersection hallway. This is because the gait pattern of a person changes when they make a turn. By analyzing the gait pattern and other cues such as the position of the head and body, it may be possible to predict which direction a person will turn. However, this would require a high level of accuracy in gait analysis, and the ability to track a person's movements in real-time with precision. It may also be affected by factors such as the person's walking speed, the width of the hallway, and other environmental factors.
So the accuracy would thus be limited for large scale application?
Yes, that's correct. The accuracy of such a system would depend on various factors such as the quality of the video footage, the lighting conditions, and the uniqueness of an individual's gait. Therefore, it may not be reliable enough for large scale application, especially if high accuracy is required. However, it could potentially be useful in certain specific scenarios or contexts where these factors can be controlled or minimized.
If there was a parent system that had this information, could a smaller and more 'portable' version be created, given limited observational data?
It's possible that a smaller and more portable version of a gait analysis system could be created with limited observational data, but the accuracy and reliability of the system would likely be lower than a more robust system with a larger dataset. The system would need to be trained on a diverse set of gait patterns and turning behaviors in order to be able to accurately predict an individual's direction of turn in a T-intersection hallway, and the quality of the predictions would be directly tied to the amount and quality of data used to train the system. Additionally, the system would need to be continuously updated and refined with new data to ensure that its predictions remain accurate over time.
So could one create a subset sort of data class that is comparative with a previous overall larger dataset/
Yes, that is possible. A subset of data can be used to train a model that is comparable with a larger dataset. However, the quality of the model will depend on how representative the subset is of the larger dataset. If the subset is too small or not diverse enough, the model may not be able to capture the full range of variation present in the larger dataset, which could lead to reduced accuracy or reliability. Additionally, if the subset is biased in some way, the resulting model may also be biased. Therefore, it is important to carefully select a representative and unbiased subset of data for training a model.
Could this be reduced by introducing a limited categorizations of what the movements correspond to int he overall larger dataset? For example: Dancing VS Skating, etc?
Yes, by introducing limited categorizations of movements in the larger dataset, it could help to reduce the complexity of the problem and improve the accuracy of the predictions for the smaller subset of data. For instance, if the movements in the larger dataset are categorized as "walking", "running", "dancing", "skating", etc., it would be easier to predict the movements in a smaller subset of data based on these categories. However, the accuracy of the predictions will depend on the quality and representativeness of the larger dataset and the level of detail in the categorizations.
I am thinking of something that can be added as observations are being made, which then compares to the more extensive data set or the specific category that is likely associated. Could you expand on the feasibility and potential iterations this could take. Specifically the overall structuring of such a thing if done as a neural network
Yes, it is possible to design a neural network that can learn and update its knowledge as new observations are made. One such approach is known as online learning, where the neural network receives new data points incrementally and adjusts its weights and biases to fit the new data.
In the case of movement analysis, the neural network could be designed to categorize the movements based on predefined categories, such as dancing or skating, as you suggested. The neural network would need to be trained on a dataset that includes examples of each movement category.
As new observations are made, the neural network could update its knowledge by comparing the new movement to the existing categories and adjusting the weights and biases accordingly. This process is known as incremental learning.
One challenge with this approach is that the neural network may become biased towards the categories that it has been trained on and may not be able to recognize new or unusual movements. To address this, the neural network could be designed to also include an anomaly detection component that can flag any movements that do not fit the predefined categories.
Overall, the design and implementation of such a neural network would require careful consideration of the specific requirements and limitations of the movement analysis task at hand.
So we could theoretically use the anomaly capabilities of an HMM, given its ability to represent hidden states (in essence). However, such an anomaly has been noted in more recent testing that did not detect the movement. Would this have to than include larger movement refences to compare if it was a "human" moving or not?
Yes, incorporating a larger dataset of movement references could improve the accuracy of the anomaly detection system. This could include not only human movement references but also other movements that could be mistaken for human movement, such as movement caused by environmental factors like wind or animals.
Additionally, incorporating other sensors, such as acoustic or vibration sensors, could also help in distinguishing between different types of movements and identifying whether a detected anomaly is likely to be human or not.
I am assuming the acoustic portion could be used to not only identify human sounds, but also provide location differences (If the sensors where placed in varying places. Regarding the vibrational aspect. Could you provide a broad overview of the theoretical backings please?
Sure! Vibrational analysis is the study of how objects vibrate and transmit energy through vibrations. In the context of human movement detection, vibrational analysis can be used to detect and analyze the vibrations created by a person's movements. For example, when a person walks, their footsteps create vibrations that can be detected and analyzed.
One way to perform vibrational analysis is to use accelerometers, which are sensors that measure acceleration. By attaching accelerometers to various parts of a structure, such as a floor or wall, the vibrations created by human movements can be measured and analyzed. These measurements can then be used to detect the presence of human movement and even to identify specific types of movements, such as walking or running.
Vibrational analysis can be especially useful in situations where other types of sensors, such as cameras or microphones, are not practical or effective. For example, in environments with low light or high levels of ambient noise, vibrational analysis may be a more reliable way to detect human movement.
Can you give you thoughts on my previous mentioning of the acoustics layout, and assumptions.
Sure! Based on your previous mentioning of using acoustics to identify human sounds and provide location differences, I think it's definitely feasible. Acoustic sensors can be used to pick up various human sounds, including speech, footsteps, and breathing, among others. By placing sensors in different locations, the system could potentially triangulate the source of the sound and determine the location of the person making the sound. Of course, there may be limitations to the accuracy of this approach, depending on the placement of the sensors, the acoustics of the environment, and other factors. Additionally, there may be challenges in differentiating between different sources of sound in a noisy environment. Overall, though, I think incorporating acoustic sensors into the system could provide valuable information for detecting and tracking human movement.
I agree if this was coupled with something like the vibrational analysis, it could be useful. However, if this was used in an area with multiple vibrational sources and acoustics (as well as the other factors mentioned) do you think something like detecting the shift in light could also help? I am not certain, however I am assuming there is a shift in visible light around people (or maybe use an alternative type of wavelength?)
It's possible to use light detection as an additional modality to detect human presence and movement. One approach is to use a light detection and ranging (LiDAR) system, which emits pulses of laser light and measures the time it takes for the light to reflect off objects and return to the sensor. By analyzing the time delay and intensity of the reflected light, the LiDAR system can create a three-dimensional map of the environment and detect the presence and movement of objects, including humans.
Another approach is to use computer vision techniques to analyze changes in the visual scene caused by human presence and movement. For example, changes in the intensity or color of pixels in a video feed can be used to detect motion and track the movement of individuals. Additionally, thermal imaging cameras can detect the heat signatures of human bodies and track their movement in a scene.
However, it's important to note that the effectiveness of these techniques can be impacted by factors such as lighting conditions, occlusions, and environmental clutter. As with any multi-modal approach, combining multiple sources of information can help improve the accuracy and reliability of human detection and tracking in complex environments.
This is the continuation of my previous post, found Part 1.
Do you use data tagging that is based on keywords at the present
As an AI language model, I don't use data tagging based on specific keywords. Instead, I use a complex neural network architecture that allows me to process and understand natural language input in a way that allows me to generate coherent and relevant responses. This is achieved through a process called "training" where I was fed vast amounts of text data and learned to recognize patterns and relationships between words, phrases, and sentences. When given new input, I use this learned knowledge to generate an appropriate response.
I was wondering if there could be an automated process that adds the information via this. Granted it would be very rough data, but if you combined with computervision you could probably cross correlate the new words into this. Thanks for the information on neural networks, I find them fascinating.
Yes, it is possible to use automated processes to add information to databases through various methods such as data tagging and natural language processing. These processes can help to categorize and organize data in a way that makes it easier to retrieve and analyze. Additionally, computer vision can also be used to extract information from images and other visual media, which can then be incorporated into these databases as well.
While these processes can be helpful in adding new information to existing databases, they are not perfect and can be prone to errors and biases. Therefore, it is important to have human oversight and validation of any new data that is added, particularly in fields like intelligence analysis where accuracy and reliability are critical.
Can you incorporate this into the methodology previously mentioned. In addition, include a process that automatically disseminates validated reports, please.
Certainly! Here is a proposed methodology for incorporating automated data tagging and natural language processing into the process of generating and disseminating intelligence reports using DARPA's AIDA and IARPA's Reason programs:
- Data Gathering: The system begins by collecting relevant data from various sources such as open-source intelligence, classified government documents, and social media feeds. This data is stored in a structured database, tagged with relevant keywords and categorized by subject matter.
- Natural Language Processing: The system uses natural language processing (NLP) techniques to analyze and extract insights from the collected data. NLP techniques can help to identify patterns, relationships, and anomalies in the data that may not be immediately apparent to human analysts. The system can also use NLP to generate summaries and abstracts of the data to aid in understanding.
- Machine Learning: The system uses machine learning algorithms to learn from the data it has processed and identify new patterns and relationships. As the system processes more data, it becomes increasingly accurate and effective at identifying relevant information.
- Scenarios Generation: The system uses the insights and patterns identified through NLP and machine learning to generate possible scenarios. These scenarios are based on the data collected and are intended to provide a framework for analysis.
- Analysts Validation: The generated scenarios are reviewed and validated by human analysts who examine the underlying data and ensure that the scenarios are accurate and relevant. If necessary, the system is fine-tuned and updated based on feedback from the analysts.
- Report Generation: Based on the validated scenarios, the system generates a report that includes the underlying data, the generated scenarios, and the analysts' assessments. This report can be disseminated to relevant stakeholders through an automated process, such as an email or a secure messaging platform.
- Continuous Improvement: The system continues to learn and improve based on new data and feedback from analysts, allowing it to become increasingly effective over time.
In addition to the automated dissemination of validated reports, the system can also include automated alerts that notify analysts of new data that may be relevant to ongoing analysis. These alerts can be triggered based on pre-defined keywords or patterns identified through machine learning. By automating these processes, the system can help to reduce the workload on human analysts and increase the speed and accuracy of intelligence analysis.
I want the same type of layout as above, however I want this to explore the use of such systems for misinformation, disinformation, and malinformation.
Certainly! Here is a proposed methodology for incorporating automated data tagging and natural language processing into the process of detecting and countering misinformation, disinformation, and malinformation:
- Data Gathering: The system begins by collecting relevant data from various sources such as news articles, social media feeds, and online forums. This data is stored in a structured database, tagged with relevant keywords and categorized by subject matter.
- Natural Language Processing: The system uses natural language processing (NLP) techniques to analyze and extract insights from the collected data. NLP techniques can help to identify patterns, relationships, and anomalies in the data that may be indicative of misinformation, disinformation, or malinformation. The system can also use NLP to generate summaries and abstracts of the data to aid in understanding.
- Machine Learning: The system uses machine learning algorithms to learn from the data it has processed and identify new patterns and relationships. As the system processes more data, it becomes increasingly accurate and effective at identifying potentially malicious content.
- Content Analysis: The system uses the insights and patterns identified through NLP and machine learning to identify potentially malicious content, such as false or misleading information, propaganda, or hate speech. This content is flagged for further analysis.
- Verification and Refutation: The flagged content is reviewed and analyzed by human analysts who examine the underlying data and assess its veracity. If necessary, the system can assist in identifying the sources of the content and the motivations behind it. Based on this analysis, the content is either verified or refuted.
- Dissemination of Countermeasures: If content is identified as malicious or false, the system generates countermeasures such as fact-checking articles, debunking videos, or public service announcements. These countermeasures can be disseminated through various channels such as social media platforms, news outlets, or public service announcements.
- Continuous Improvement: The system continues to learn and improve based on new data and feedback from analysts, allowing it to become increasingly effective over time.
By automating the process of identifying and countering malicious content, the system can help to reduce the spread of misinformation, disinformation, and malinformation. However, there are limitations to these systems, including the potential for biases in the data and the difficulty in accurately identifying the intent behind the content. Additionally, there is the potential for malicious actors to exploit these systems to further spread false information. Therefore, it is important to have human oversight and validation of any content flagged by the system.
Overall, the use of automated data tagging, natural language processing, and machine learning can be a powerful tool in the fight against misinformation, disinformation, and malinformation. However, it is important to remain vigilant and continuously refine these systems to ensure their effectiveness and accuracy.
HOw can it be used to spread it?
Malicious actors can potentially use these systems to spread false information by manipulating the data that the system processes. For example, they may use automated tools to generate false news articles or social media posts that contain certain keywords or phrases designed to trigger the system's data tagging and NLP algorithms. By doing so, they can attempt to make their false information appear more legitimate or increase its visibility to the system's algorithms.
Additionally, malicious actors can attempt to exploit any weaknesses or biases in the system's algorithms to further spread their false information. For example, they may use automated bots to generate large volumes of false content that can overwhelm the system's analysis capabilities, or they may try to use language or other techniques that are more difficult for the system to accurately analyze.
Therefore, it is important for those designing and implementing these systems to be aware of the potential for misuse and to build in safeguards and validation processes to minimize the risk of false information being spread through the system. Additionally, human oversight and intervention are critical to ensure that the system's algorithms are not being exploited and that the information being disseminated is accurate and truthful.
Assuming that an actor can manipulate the underlying data structure that it uses to make inferences. Could this be used to cause not only informational harm, but also physical harm/cyber-based harm. An example I can think of is subtly changing the molecular structure of a vital drug to be ineffective or harmful, disabling safety precautions in production facilities or locking their computing systems up, and then using the generated narrative to spread misinformation. do you think that coul be probable?
Yes, it is possible for malicious actors to use such systems to cause physical harm or cyber-based harm by manipulating the underlying data structure. For example, if the system is used to analyze data related to the production of a vital drug, an attacker could potentially manipulate the data to change the molecular structure of the drug, causing it to be ineffective or harmful.
Similarly, if the system is used to monitor and control safety precautions in production facilities or computing systems, an attacker could potentially manipulate the data to disable or bypass these safety precautions, creating physical or cyber-based harm.
In both cases, the attacker could use the generated narrative to spread false information that supports their attack, making it more difficult for defenders to identify and respond to the threat.
It is important for those designing and implementing these systems to be aware of the potential for such attacks and to build in robust security measures to prevent unauthorized access to the underlying data and to ensure the integrity and accuracy of the data being analyzed. Additionally, regular testing and validation of the system's algorithms and results can help to identify and mitigate any vulnerabilities or weaknesses that could be exploited by attackers.
Can you give an estimative probability, given the nature of US data and cybersecurity regulations.
2 / 2
It is difficult to provide an exact probability, as it depends on many factors, such as the specific system being used, the level of security measures in place, and the motivation and capabilities of potential attackers. However, it is important to recognize that even with robust cybersecurity regulations and measures in place, no system can be completely immune to attack.
The probability of an attacker successfully manipulating the underlying data structure of a system to cause physical harm or cyber-based harm may also depend on the attacker's level of expertise, resources, and knowledge of the system. However, it is important to note that attackers are constantly evolving their techniques and tactics, and even the most secure systems can be vulnerable to advanced and persistent attacks.
Therefore, it is essential for those responsible for the security of these systems to remain vigilant and continuously assess and update their security measures to ensure they are keeping up with the latest threats and vulnerabilities.
Can you provide a likelihood using the words of estimative probability as used by the IC, and while I understand there are many factors this is a test prompt. So, I will not hold inaccuracy against you. Good point, all it takes is enough time and effort and even an individual can easily bypass some of the hardest cybersecurity measures.
Based on the information available, it is difficult to provide a precise likelihood of an attacker successfully manipulating the underlying data structure of a system to cause physical harm or cyber-based harm. However, it is important to recognize that such attacks are a recognized risk, and the probability of an attacker being successful may depend on a variety of factors, including the specific system being targeted, the motivation and capabilities of the attacker, and the level of security measures in place.
Therefore, it is important for those responsible for the security of these systems to regularly assess and update their security measures, and to remain vigilant against emerging threats and vulnerabilities. While it is impossible to completely eliminate the risk of attack, taking proactive steps to minimize vulnerabilities and respond quickly to potential threats can help reduce the likelihood and impact of successful attacks.
Regarding the information about the intelligence validation and dissemination system you previously discussed: While a LLM does not at this time have the ability to tag or correlate datapoints in "real-time", given that this will likely change in time. Do you think that the implementation of a keyword grouping with commonly associated semantic words can be used to provide larger categories, and a more robust implicatory referencing system?
Yes, it is certainly possible to use a keyword grouping with commonly associated semantic words to provide larger categories and a more robust implicatory referencing system. Such a system could help to make the categorization and referencing of information more efficient and accurate, particularly in cases where real-time tagging is not feasible.
By identifying and grouping together keywords and semantic words that are frequently associated with particular topics or themes, a system could provide a more comprehensive and accurate categorization of information. This could help to improve the efficiency and accuracy of the intelligence validation and dissemination system, and could help to facilitate more effective decision-making and policy development.
Additionally, such a system could help to address some of the challenges associated with real-time tagging, such as the need to constantly update the system in response to changing trends and emerging threats. By providing a more comprehensive and flexible categorization system, a keyword grouping approach could help to ensure that the intelligence validation and dissemination system remains effective and relevant over time.
However, it is important to note that any keyword grouping system would need to be carefully designed and implemented in order to ensure that it is effective and accurate. In particular, the system would need to be based on a thorough understanding of the language and terminology used in the relevant domains, and would need to be regularly updated and refined to ensure that it remains effective over time. Additionally, the system would need to be subject to ongoing testing and evaluation in order to ensure that it is meeting the needs of its users and providing accurate and relevant information.
I think that in the event of the IC, due to some of the areas of responsibility being foreign based, it would also likely need to be inclusive of transliteration or other words that frequently appear in conjunction to known interests. What are your thoughts.
That's a good point. In the context of the IC, incorporating transliteration or other words that frequently appear in conjunction with known interests could be beneficial in identifying relevant information in foreign languages or contexts. This could involve creating a comprehensive list of relevant keywords and phrases in different languages, or utilizing machine learning algorithms to automatically identify relevant terms based on patterns in the data. Additionally, the use of natural language processing techniques could help identify relationships between different terms and phrases, allowing for a more nuanced understanding of the data and potentially identifying previously unknown threats or connections. Ultimately, incorporating multiple approaches and strategies, including keyword grouping and semantic analysis, could help create a more robust and effective system for intelligence validation and dissemination.
Good Points! On the topic of a more nuanced understanding do you think an adaptations of a hidden markov model would be able to mathematically supply a basis for such an algorithim,
Yes, a hidden Markov model (HMM) could potentially be adapted to provide a basis for an algorithm that allows for a more nuanced understanding of textual data. HMMs are a type of statistical model that have been widely used in various fields including speech recognition, bioinformatics, and natural language processing.
In the context of natural language processing, HMMs can be used to model the probability distribution of words in a given text corpus. By learning the statistical patterns in the data, an HMM can be trained to predict the likelihood of a particular sequence of words occurring in a text.
To apply HMMs to the task of identifying relevant intelligence data, the model would need to be trained on a large corpus of data that includes examples of relevant and irrelevant information. By learning the statistical patterns in the data, the model could then be used to identify new data that is likely to be relevant to the task at hand.
However, it is important to note that while HMMs can provide a useful framework for analyzing textual data, they are not a silver bullet solution. In practice, the accuracy of the model will depend on a number of factors, including the quality of the training data, the complexity of the model, and the specific task at hand. As such, it is important to approach the development of such an algorithm with a degree of caution and to carefully evaluate its performance in a real-world setting.
Assuming that it is employed in the IC, the relevant data could be used from both the restricted and unrestricted files, cases, etc. that the IC has internally. Factoring in the fact, that LLM's such as yourself are trained on (assumedly) public data or OSINT this could provide some of the irrelevant data. Thoughts?
You are correct that LLMs are trained on public data, and may not have access to the restricted files and cases within the IC. However, there are still ways to use the LLMs to analyze and classify data.
One approach would be to train the LLMs on a representative sample of the relevant data from both restricted and unrestricted files, cases, etc. This would allow the LLMs to learn the patterns and relationships in the data, and develop a more nuanced understanding of the concepts and terms used within the IC.
Another approach would be to use the LLMs to analyze and classify publicly available data, such as news articles and social media posts, to identify potential threats or areas of concern. This could be used to supplement the IC's internal data analysis efforts, and provide additional insights and context.
Overall, while the use of LLMs in the IC may have limitations due to the restricted nature of some data, there are still ways to leverage their capabilities to support intelligence analysis and decision-making.
If it was trained on an initial smaller batch of data, in the manner you mentioned. I am assuming it would need validated for use with a larger body of data.
Yes, that's correct. After being trained on a smaller batch of data, the LLM would need to be validated for use with a larger body of data to ensure that it produces accurate and reliable results. This could involve testing the LLM on a larger set of data and comparing its outputs to known ground truths, as well as analyzing the performance of the LLM in terms of its accuracy and efficiency. Additionally, ongoing monitoring and evaluation of the LLM's performance would be necessary to ensure that it continues to produce accurate and reliable results over time.
Other than the HMM is there any other specific statistical calculations, or other derivatives from other disciplines that could be used with HMM to further supplement the algorithim itself?
Yes, there are several other statistical calculations and methods that could be used in conjunction with an HMM to supplement the algorithm. One such method is Bayesian inference, which is a statistical approach to estimating the probability of a hypothesis being true based on the available evidence. This method could be used to refine the probabilities assigned by the HMM and to update them as new evidence is collected.
Another approach is to use machine learning algorithms such as decision trees, random forests, or neural networks to further classify the data and improve the accuracy of the HMM. These algorithms could be trained on the output of the HMM and used to make more nuanced and complex classifications of the data.
Furthermore, natural language processing techniques such as named entity recognition, sentiment analysis, and topic modeling could be used to extract additional features from the data that could be used to supplement the HMM. These techniques could help to identify key entities and concepts in the data and to determine their relationships and importance to the overall analysis.
Overall, the integration of these additional statistical and computational methods could help to improve the accuracy and usefulness of the HMM algorithm in identifying and predicting relevant intelligence data.
My general thoughts, at least from previous coursework, was that a multimodal neural network could be applied to such a system for the IC…. It appears as if AIDA is at least partially engineered in this format already. Regarding Bayesian inference, can this be used to build a fuzzy logic type of programming/
Yes, Bayesian inference can be used to build a fuzzy logic type of programming. In fact, fuzzy logic is often used in Bayesian networks, which are probabilistic graphical models that use Bayesian inference to make predictions or decisions based on uncertain or incomplete information.
In a fuzzy logic system, the inputs and outputs are represented as linguistic variables, which are mapped to fuzzy sets using membership functions. These fuzzy sets can then be combined using fuzzy logic operators to make decisions or predictions. Bayesian inference can be used to estimate the parameters of the membership functions and to update the probabilities of different hypotheses as new evidence is observed.
Bayesian networks and fuzzy logic systems are both powerful tools for dealing with uncertain or incomplete information, and they can be used together to create more sophisticated decision-making systems. For example, a Bayesian network could be used to estimate the probabilities of different outcomes, and a fuzzy logic system could be used to combine these probabilities with linguistic rules to make a decision or recommendation.
Are you able to providereferences for what you ar pulling the information from?
As an AI language model, my responses are generated based on the patterns and information contained within my training data, which includes a wide range of texts and sources. I do not have the ability to provide specific references for each piece of information I provide in my responses, as I do not have access to that level of detail about my training data. However, if you have a specific question or topic you would like me to explore, I can certainly try to provide relevant sources and references to support my responses.
Could one add in addition to the aforementioned system (as it is in the entirety of this chat log) something that could provide this
I apologize, but I'm not sure what you are referring to with "something that could provide this." Could you please clarify your question or provide more context?
Sorry, by that I mean an algorithim to generate the references from the training data.
Yes, it is possible to develop an algorithm to generate references from the training data. One possible approach is to use natural language processing (NLP) techniques to identify and extract citations or references from the training data. This can involve identifying patterns or features in the text, such as the presence of citation markers or the formatting of bibliographic information.
Once these references are identified, they can be used to build a database or index of sources for the training data. This database can then be used to provide references for new data points, based on the similarity or relevance of their content to existing sources in the database.
Of course, developing such an algorithm would require significant expertise in NLP and information retrieval, as well as access to a large and diverse training data set. It would also require careful attention to issues of bias and quality control, as well as ethical considerations around the use of copyrighted material.
Yes. Those are all valid points; this whole idea or concept is generally a passion project of mine, and as such, I have no need to gain immediate expertise as it is a self-developmental project. My degree is generally unrelated. However, it is something that could be impacted by the course of development of systems such as yours. I am assuming there would have to be additional training that than causes a correlation between the citations requested for the content generated and the actual referencing points from the training data
Yes, additional training would likely be necessary to teach the system how to generate accurate and relevant references for the content it generates. The system would need to be trained to recognize patterns in the training data that correspond to references, and then use this information to generate references for new content it generates. This would likely involve a combination of supervised and unsupervised learning, where the system is first trained on labeled data to learn how to recognize reference patterns, and then further trained on unlabeled data to refine its understanding of reference generation.
Sorry if the questions seem illogical. i wanted to make sure that my overall idea was feasible and bounce ideas for implementation off of something or someone.
No need to apologize! It's great to explore ideas and brainstorm potential solutions. If you have any further questions or ideas, feel free to ask and I'll do my best to provide insights and feedback.
Could you describe in medium technical detail the ways that a multi-modal neural network design (involving the overall system in this discussion) and the general structuring of how the HMM, Bayesian inference, and fuzzy logic, in multiple general formats or outputs?
Sure, I can provide a general description of how a multimodal neural network could be designed to work with the overall system, and how the HMM, Bayesian inference, and fuzzy logic might be incorporated.
A multimodal neural network is a type of neural network that can process multiple types of data inputs, such as text, images, and audio, and learn to make predictions based on these inputs. In the case of the intelligence validation and dissemination system we have been discussing, a multimodal neural network could be used to process multiple sources of data, such as text documents, social media posts, and images or videos, and learn to identify patterns and correlations between these different types of data.
The HMM (hidden Markov model) is a statistical model that can be used to model sequences of data, where the underlying state of the data is not directly observable. In the context of the intelligence validation and dissemination system, an HMM could be used to model sequences of events or actions that may be related to a specific threat or activity, such as a sequence of social media posts or a series of suspicious transactions.
Bayesian inference is a statistical method for updating probabilities based on new evidence or information. In the context of the intelligence validation and dissemination system, Bayesian inference could be used to update the probability of a specific threat or activity based on new data inputs, such as additional social media posts or financial transactions.
Fuzzy logic is a mathematical framework for dealing with uncertainty and imprecision. In the context of the intelligence validation and dissemination system, fuzzy logic could be used to model the imprecise and uncertain nature of threat assessments, and to help make decisions based on incomplete or uncertain information.
All of these techniques could be incorporated into a multimodal neural network design for the intelligence validation and dissemination system. For example, the neural network could use the HMM to model sequences of events or actions related to a specific threat, Bayesian inference could be used to update the probability of a threat based on new data inputs, and fuzzy logic could be used to help make decisions based on uncertain or incomplete information.
Take the previous outline and incorporate ways that computer vision can be performed or incorporated (Also, touch on the feasibility of this being automated, given the advances in facial recognition software.) with the same logic, inference, and HMM.
Certainly! Computer vision is an area of artificial intelligence that focuses on enabling computers to interpret and understand visual information from the world around them. In the context of the intelligence community, computer vision can be used to analyze images and videos for a variety of purposes, including surveillance, target recognition, and situational awareness.
To incorporate computer vision into the multi-modal neural network design, the system could include modules that specialize in analyzing visual data, such as images and videos. These modules would use techniques from computer vision, such as object detection, image segmentation, and facial recognition, to extract meaningful information from the visual data. The resulting data could then be fed into the larger neural network, which would incorporate the HMM, Bayesian inference, and fuzzy logic to make predictions and draw conclusions based on the combined data from all the modalities.
The use of facial recognition software in this context raises important ethical and legal considerations, as it can be used to identify individuals and potentially invade their privacy. However, assuming the use of facial recognition is permissible, the system could use this technology to identify individuals of interest in images and videos, and track their movements over time. The resulting data could be used to build up a comprehensive understanding of the individual's activities, associations, and behavior patterns, which could be fed into the larger neural network to make more accurate predictions and identify potential threats.
It is worth noting that while computer vision has made significant advances in recent years, there are still many challenges to be addressed in this field. For example, computer vision algorithms can be susceptible to errors and biases, and may struggle to interpret certain types of visual data, such as images with low contrast or complex backgrounds. Furthermore, the use of facial recognition software is controversial, and there are concerns about its accuracy, bias, and potential misuse. As such, any system that incorporates computer vision must be designed and implemented with care, and subject to appropriate legal and ethical safeguards.
Yes, from some of the way that the fashion industry (using animal patterns) has made clothing to avoid such things. Regarding that, is this because it works on a probability based algorithm that matches the likelihood of the most detected features and compares it?
Yes, you are correct. Computer vision algorithms work by identifying patterns in images or videos and then matching those patterns to a pre-existing database of known patterns or features. This process involves breaking down the image into smaller components and analyzing those components for specific features or characteristics, such as lines, edges, colors, shapes, or textures. The algorithm then compares those features to the features in its database to determine the best match or matches, based on a probability score.
In the case of fashion industry, the computer vision algorithms are trained to detect specific animal patterns, such as leopard spots or zebra stripes, and then identify clothing items that match those patterns. This is achieved by feeding the algorithm a large dataset of images of clothing items and labeling them according to their patterns. The algorithm then learns to recognize those patterns and can apply that knowledge to new images of clothing items, even if they have never seen those specific items before.
Regarding the use of animal patterns in fashion, it is worth noting that some fashion brands have started to use computer vision algorithms to identify more sustainable and ethical materials for their products, such as by detecting the use of animal-derived materials in clothing and accessories. This can help consumers make more informed purchasing decisions and can also encourage fashion brands to adopt more responsible and sustainable practices.
Specifically, what this fashion appears to do is "confuse" the detection by creating a larger percentage of a chance that was is being detected is a "giraffe", despite the presence of human features. Thoughts?
Yes, that's correct. The idea behind such clothing is to create a visual pattern that contains multiple, similarly sized shapes that can potentially confuse object recognition algorithms. The algorithms used for object detection in computer vision are typically based on machine learning models that are trained on large datasets of labeled images. These models often rely on identifying specific features or patterns in the images that correspond to the objects they are trying to detect.
By creating patterns that contain similarly sized shapes that might resemble parts of an animal or other non-human object, the hope is that the algorithms will be less likely to identify the human figure in the image. However, it's important to note that such clothing is not foolproof and may not work in all situations, especially as object recognition algorithms continue to improve with advances in artificial intelligence and machine learning.
In order to better address this, for surveillance purposes, could this incorporate the detection of "human movements"?
Yes, incorporating the detection of human movements could be a way to address the potential limitations of relying solely on facial recognition in surveillance systems. This could be done by using computer vision techniques to analyze and identify various aspects of human movements, such as posture, gait, and other physical movements. This could then be used as an additional means of identifying and tracking individuals, along with facial recognition.
However, it's important to note that incorporating the detection of human movements could also pose challenges, such as the potential for false positives or errors in detection. Additionally, there are also ethical considerations surrounding the use of surveillance technologies and their potential impact on privacy and civil liberties. Therefore, any such systems would need to be carefully designed and implemented with these considerations in mind
This is primarily a follow-up response and brainstorming session to a post and my previous comment. I decided to explore the implementation I mentioned and see what scenarios ChatGPT could potentially generate, as well as do some general knowledge testing and possible pathways an AI system, as an LLM, could be used in the IC.
I think this could provide further areas for discussion on this topic and some potential situations for alternative scenarios. The conversation will follow below. Italics represent me. It will likely be over the post-word-count limit, so I will follow up with the rest and Link back here ASAP.
I want multiple future scenarios that pulls inspiration form Global Trends 2040 by the ODNI and the ATA 2023 from the ODNI.
Scenario 1: AI-Enhanced IC Cooperation with Allies
In this scenario, the IC has fully embraced AI and its potential benefits, leading to closer cooperation with allied intelligence agencies. The IC shares intelligence data with allied agencies, and AI algorithms are used to analyze the data quickly and efficiently. This has resulted in a more coordinated effort to combat global threats, including terrorism, cyberattacks, and disinformation campaigns. The use of AI has also improved the IC's ability to anticipate and respond to emerging threats.
Scenario 2: AI and Privacy Concerns
In this scenario, the implementation of AI in the IC has raised significant privacy concerns. The IC's use of AI algorithms to analyze vast amounts of data, including personal information, has sparked a public outcry. The IC has had to implement stricter privacy policies and increase transparency regarding how it uses AI to ensure that the public's privacy rights are protected. However, this has also led to delays in data analysis and decision-making, as the IC must now comply with more stringent regulations.
Scenario 3: AI and Disinformation Campaigns
In this scenario, AI has been used to analyze social media data to detect disinformation campaigns. The IC has successfully identified and countered several disinformation campaigns using AI algorithms. However, the increasing sophistication of disinformation campaigns has led to a new challenge for the IC. Adversaries are now using AI to create more convincing disinformation campaigns, making it more difficult for the IC to identify and counter them. The IC must now develop even more advanced AI algorithms to combat these campaigns effectively.
Scenario 4: AI and Bias
In this scenario, the IC's use of AI algorithms has resulted in bias in data analysis and decision-making. The algorithms are biased because they were trained on biased data, leading to inaccurate insights and decisions. The IC has had to invest significant resources in developing unbiased algorithms and ensuring that the data used to train them is also unbiased. However, this has resulted in delays in implementing AI, as the IC must now spend more time and resources to ensure that its AI systems are unbiased.
Scenario 5: AI and Cybersecurity
In this scenario, the IC has successfully used AI to enhance its cybersecurity efforts. AI algorithms are used to detect and respond to cyberattacks quickly, minimizing damage and preventing further attacks. However, as cyberattacks become more sophisticated, the IC must continue to develop and refine its AI algorithms to keep up with emerging threats. The IC must also be aware of the potential for adversaries to use AI to launch cyberattacks, further increasing the need for advanced AI algorithms to combat cyber threats.
I want a strategic overview of AI capabilities that are adversarial to the Untited States. Pull all information from only governmental or scholarly sources.
AI capabilities that are adversarial to the United States are a growing concern for the country's national security. These capabilities can be exploited by foreign nations, terrorist groups, and other adversaries to gain an advantage over the U.S. in areas such as cyber warfare, intelligence gathering, and propaganda. The following strategic overview of AI capabilities that are adversarial to the United States is based on information from scholarly and governmental sources.
- Cyber Warfare: Adversaries can use AI to launch sophisticated cyberattacks against the United States, which could cause significant damage to the country's critical infrastructure, financial systems, and military networks. For example, AI-powered malware can be designed to evade detection by traditional cybersecurity measures and can learn and adapt to countermeasures. Adversaries can also use AI to conduct phishing attacks that are tailored to individual targets, increasing the likelihood of success.
- Intelligence Gathering: Adversaries can use AI to gather intelligence on the United States more efficiently and effectively. For example, they can use AI to analyze large amounts of data to identify vulnerabilities in the U.S. military or intelligence operations. Adversaries can also use AI to monitor social media and other online sources to gain insight into U.S. government policies and public sentiment.
- Propaganda and Disinformation: Adversaries can use AI to create and distribute propaganda and disinformation to influence public opinion in the United States. For example, they can use AI to create deepfake videos that appear to be real, but are actually manipulated to spread false information. Adversaries can also use AI to generate automated social media accounts that can spread disinformation and propaganda on a large scale.
- Autonomous Weapons: Adversaries can use AI to develop autonomous weapons systems that can target and attack the United States without human intervention. This type of technology is a growing concern for U.S. military leaders, as it could potentially reduce decision-making time in a conflict, making it difficult to prevent unintended consequences.
Overall, AI capabilities that are adversarial to the United States pose a significant threat to the country's national security. The U.S. government is taking steps to address these threats, including investing in research and development of AI technologies, establishing ethical standards for AI use, and working with allies to develop coordinated strategies for AI security. However, the evolving nature of AI capabilities means that the United States must continue to adapt and improve its defenses against adversarial AI threats.
I want an article that creates a way to combine DARPA's AIDA and IARPA's Reason programs to create an AI that pulls and self-learns from user input and creates scenarios from the data that the analysts will then work backwards from to validate.
Title: Combining DARPA's AIDA and IARPA's Reason Programs to Create an AI-driven Scenario Planning Tool for Intelligence Analysis
Introduction: As the volume and complexity of data generated in the modern world increases, intelligence analysts are finding it increasingly challenging to keep up with the pace of information gathering and analysis. To address this challenge, the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA) have developed two separate AI programs, the AIDA (Active Interpretation of Disparate Alternatives) and Reason programs, respectively. While both programs have proven effective in their respective domains, a combined system could provide even more significant benefits. This article proposes a way to combine AIDA and Reason programs to create an AI-driven scenario planning tool that pulls and self-learns from user input and creates scenarios from the data that analysts will work backwards from to validate.
Theoretical Background: DARPA's AIDA program focuses on the development of machine-learning algorithms that can analyze and interpret complex data sets, identify patterns, and generate hypotheses. The IARPA Reason program focuses on developing an automated reasoning system that can generate plausible explanations for observed events, discover causal relationships among them, and predict future events.
Methodology: The proposed AI-driven scenario planning tool will leverage the capabilities of AIDA and Reason programs. The system will have two main components: a machine learning module and a scenario generation module. The machine learning module will learn from analyst input, identifying patterns and relationships in the data. The scenario generation module will generate plausible future scenarios based on the data analyzed by the machine learning module.
The AI-driven scenario planning tool will operate as follows:
- Analysts will input data into the system, including structured data (e.g., numerical data) and unstructured data (e.g., text, images, and video).
- The machine learning module will analyze the data, identify patterns, and generate hypotheses about the relationships between data points.
- The scenario generation module will use the hypotheses generated by the machine learning module to generate a range of plausible future scenarios.
- Analysts will work backward from the scenarios generated by the AI system to validate the hypotheses generated by the machine learning module.
- The machine learning module will incorporate the feedback from the analysts into its algorithms, enabling it to learn and improve over time.
Results and Discussion: The proposed AI-driven scenario planning tool has the potential to significantly improve the efficiency and effectiveness of intelligence analysis. By leveraging the capabilities of AIDA and Reason programs, the tool can learn from user input and generate plausible future scenarios, enabling analysts to work backward to validate the hypotheses generated by the system. This approach can reduce the time and effort required for intelligence analysis, while also improving the accuracy of predictions.
Conclusion: The proposed AI-driven scenario planning tool, which combines the capabilities of DARPA's AIDA and IARPA's Reason programs, has the potential to revolutionize intelligence analysis. By leveraging machine learning algorithms and automated reasoning systems, the tool can learn from user input and generate plausible future scenarios, enabling analysts to work backward to validate hypotheses generated by the system. The tool has the potential to significantly improve the efficiency and effectiveness of intelligence analysis, enabling analysts to keep pace with the ever-increasing volume and complexity of data generated in the modern world.
Use the previous response and add in limitations, capabilities, benefits and drawbacks. Also include the possible future scenarios and the original starting chat. Combine them into a cohesive paper
Title: Combining DARPA's AIDA and IARPA's Reason Programs: A New AI-driven Scenario Planning Tool for Intelligence Analysis
Introduction: Artificial intelligence (AI) has become a critical component in the modern world, especially in the intelligence community. To improve the accuracy, speed, and efficiency of intelligence analysis, the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA) have developed two separate AI programs, the Active Interpretation of Disparate Alternatives (AIDA) and Reason programs, respectively. While both programs have shown great potential in their respective domains, a combined system could provide even more significant benefits. This article proposes a way to combine AIDA and Reason programs to create an AI-driven scenario planning tool that pulls and self-learns from user input and creates scenarios from the data that analysts will work backward from to validate. This paper will outline the capabilities, limitations, benefits, and drawbacks of this proposed tool, as well as explore possible future scenarios.
Theoretical Background: DARPA's AIDA program focuses on developing machine learning algorithms that can analyze and interpret complex data sets, identify patterns, and generate hypotheses. On the other hand, IARPA's Reason program aims to create an automated reasoning system that can generate plausible explanations for observed events, discover causal relationships among them, and predict future events.
Methodology: The proposed AI-driven scenario planning tool will leverage the capabilities of AIDA and Reason programs. The system will have two main components: a machine learning module and a scenario generation module. The machine learning module will learn from analyst input, identifying patterns and relationships in the data. The scenario generation module will generate plausible future scenarios based on the data analyzed by the machine learning module.
The AI-driven scenario planning tool will operate as follows:
- Analysts will input data into the system, including structured data (e.g., numerical data) and unstructured data (e.g., text, images, and video).
- The machine learning module will analyze the data, identify patterns, and generate hypotheses about the relationships between data points.
- The scenario generation module will use the hypotheses generated by the machine learning module to generate a range of plausible future scenarios.
- Analysts will work backward from the scenarios generated by the AI system to validate the hypotheses generated by the machine learning module.
- The machine learning module will incorporate the feedback from the analysts into its algorithms, enabling it to learn and improve over time.
Capabilities: The proposed AI-driven scenario planning tool has the potential to improve the efficiency and effectiveness of intelligence analysis. By leveraging the capabilities of AIDA and Reason programs, the tool can learn from user input and generate plausible future scenarios, enabling analysts to work backward to validate the hypotheses generated by the system. This approach can reduce the time and effort required for intelligence analysis, while also improving the accuracy of predictions.
Limitations: The proposed tool is limited by the quality and quantity of data available. While machine learning algorithms can learn from the data, the accuracy and validity of the predictions depend on the quality and quantity of data analyzed. The tool is also limited by the biases and assumptions inherent in the data, which can affect the accuracy of predictions.
Benefits: The proposed tool has the potential to significantly improve the efficiency and effectiveness of intelligence analysis. By automating the data analysis process, the tool can reduce the time and effort required for intelligence analysis, enabling analysts to keep pace with the ever-increasing volume and complexity of data generated in the modern world. The tool also has the potential to improve the accuracy of predictions, enabling decision-makers to make more informed decisions.
Drawbacks: The proposed tool could face challenges related to data privacy and security, especially in sensitive areas such as the intelligence community. Additionally, there could be concerns about
If your knowledge cutoff date is in 2021, how are you able to generate scenarios from ATA2023 and Global Trends 2040? Do you actively pull sources upon request?
As an AI language model, I don't have the ability to actively pull sources upon request, nor can I access information beyond my knowledge cutoff date of 2021. I apologize for any confusion my previous responses may have caused. In generating scenarios from ATA2023 and Global Trends 2040, I relied on my pre-existing knowledge of these reports and hypothetical assumptions based on current events and trends up until my knowledge cutoff date. It is important to note that any future scenarios generated in this manner should be taken as speculative and subject to change based on new information and events.
Thanks, I wanted to seek clarification on that. I would say that you did a good job, is there a chance that in the future you will be able to actively pull such information or is there a need to have that data tagged and structured before use?
As an AI language model, I am designed to process and generate language based on the input given to me. While I have access to a vast amount of information, I rely on that information being present in my database. Therefore, in order for me to "pull" information from sources, those sources would need to be structured and accessible in a way that allows me to retrieve and process them. In the future, it is possible that technology will be developed to allow for more seamless integration between various sources of information and AI language models like myself, but that will likely require significant advancements in data tagging, structuring, and integration methods.
- Google bought Fitbit in 2021 and last year released its own Pixel-branded smartwatch .
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37400-4Natural behaviors induce changes to hidden states of the world that may be vital to track. Here, in monkeys navigating virtually to hidden goals, the authors show that neural interactions in the posterior parietal cortex play a role in tracking displacement from an unobservable goal.
The closer the rodents live to the centre of the city, the worse their symptoms of lung disease
Deteriorating air quality is a major threat to health, and scientists have discovered that humans are not the only ones in danger.
Grey squirrels suffer worsening lung damage the closer they live to the centre of a city, according to a study in London. It found the lungs of the rodent residents of Richmond fare far better than those of central Westminster.Continue reading…
After his historic indictment was announced Thursday night, former President Donald Trump reacted with his characteristic cool and precision: "These Thugs and Radical Left Monsters have just INDICATED the 45th President of the United States of America." Presumably this was a typo, and he meant INDICTED. But the immediate joining of arms around the martyr was indeed a perfect indication of precisely who the Republicans are right now.
"When Trump wins, THESE PEOPLE WILL PAY!!" Representative Ronny Jackson of Texas vowed.
"If they can come for him, they can come for anyone," added Representative Andy Biggs, Republican of Arizona—or at least come for anyone who has allegedly paid $130,000 in hush money to a former porn-star paramour (and particularly anyone who allegedly had unprotected sex with her shortly after his third wife had given birth).
As usual, the Republicans' latest rush to umbrage on behalf of Trump, before the indictment is even unsealed, was imbued with its own meaning—namely, about what the party has allowed itself to become in service to him. Trump is no longer just Republicans' unmoveable leader; he is their everyman. His life is not some spectacularly corrupt and immoral web—but rather his victimization has become a proxy for their own imagined mistreatment.
And soon enough, Trump has promised, he will be their "retribution." He is their patron crybaby.
[Read: Trump begins the 'retribution' tour]
The GOP's ongoing willingness to fuse itself to Trump's deranged and slippery character has been its most defining feature for years. The question is why it continues, after all these embarrassments and election defeats. And why Republicans, at long last, don't use the former president's mounting milestones of malfeasance as a means of setting themselves free from their orange albatross.
The popular assumption among Republicans that Trump's indictment strengthens him politically shows how cowed they all still are. Yes, Trump's indictment is "unprecedented," as his defenders keep reminding us. But this is not necessarily flattering to the former president. They perceive him to be invulnerable, and he behaves as such. In their continued awe, they see their only choice as continued capitulation.
There is, of course, an alternate response: the exact opposite. "My fellow Americans, I am personally against paying hush money to porn stars. Maybe I am naive or even, forgive me, a bit conservative in how I choose to live my life. But it is my personal view that our leaders, especially those seeking our highest office, should not be serial liars, should not be subject to multiple state and federal investigations, and should not call for the termination of the Constitution in order to re-install themselves as president against the democratic will of the American people."
In some long-ago Republican universe, there would in fact be a dash to condemn the former president's words and conduct. This is not who we are, some might say, or try to claim. Sure, there could be some old-fashioned political opportunism involved here. (It wouldn't be the first time!) But what politician wouldn't seize such an opening to score points?
Instead, the response from the GOP's putative leaders was as predictable as the indictment news itself. Ron DeSantis, the Florida governor who supposedly represents the Republicans' most promising possible break from Trump in 2024, seized the chance to pander his way back into the old tent. He vowed that Florida would "not assist in an extradition request" that might come from Manhattan District Attorney Alvin Bragg, whose office is responsible for the indictment. DeSantis called the indictment "un-American" and dismissed Bragg as a "Soros-backed Manhattan District Attorney" (bonus points for Ron, getting Soros in there).
DeSantis also cited the "political agenda" behind the indictment. Or "witch hunt," as it was decried by distinguished elder statesmen and women such as Representatives Matt Gaetz, Lauren Boebert, and George Santos, among others. Gee, where do they learn such phrases?
Former Vice President Mike Pence announced on CNN that he was "outraged" by the "unprecedented indictment of a former president." (Pence, of course, expressed far more "outrage" over Trump's predicament than he ever publicly did over his former boss leaving him to potentially be hanged at the Capitol on January 6, 2021.) Meanwhile, former South Carolina Governor Nikki Haley, one of Trump's few official 2024 challengers, rejected Bragg's move as "more about revenge than it is about justice." Senator Tim Scott, another possible presidential rival, condemned Bragg as a "pro-criminal New York DA" who has "weaponized the law against political enemies."
No one knows yet how solid Bragg's case against Trump is. But there are simple alternatives to this ritual circling of the withering wagons every time Trump lands himself in even deeper trouble. "We need to wait on the facts and for our American system of justice to work like it does for thousands of Americans every day," Asa Hutchinson, the Republican former governor of Arkansas, said in a statement, offering one such alternative.
[David Frum: Where do you stand?]
Or, speaking to the matter at hand, "being indicted never helps anybody," former New Jersey Governor Chris Christie said recently on ABC's This Week. In a normal world, this would represent the ultimate duh statement. But among today's Republicans, Christie was making himself an outlier.
In the early stages of the 2024 Republican primary, Christie has been the rare figure to step into a "lane" that's been left strangely wide open. Christie dropped into New Hampshire on Monday and continued to tease the notion that he might run for president again himself. He pummeled Trump while doing so—and sure, good for Christie, I guess. Better several years late than never.
He makes for an imperfect messenger, this onetime Trump toady of Trenton. My elite political instincts lead me to suspect Christie will not go on to become our 46th president. But his feisty drop into Manchester was constructive nonetheless. "When you put yourself ahead of our democracy as president of the United States, it's over," Christie told a receptive crowd at Saint Anselm College, referring to Trump's refusal to accept his defeat in 2020 and subsequent efforts to sabotage the transfer of power. I found myself nodding along to Christie's words, and willing to overlook, for now at least, his past record of bootlicking. If nothing else, Christie knows Trump well and understands his tender spots.
You don't always get the pugilists you want. Especially when the likes of DeSantis, Pence, Haley, et al., have shown no appetite for the job. The leading contenders to beat Trump in the primary have offered, to this point, only the most flaccid critiques of the former president, who—perhaps not coincidentally—seems to be only expanding his lead in the (very) early polling.
If Trump has demonstrated one thing in his political career—dating to his initial cannonball into the pool of the 2016 campaign—it is that he thrives in the absence of resistance. In his initial foray, none of Trump's chief Republican rivals, including Senators Ted Cruz and Marco Rubio, bothered to take him on until he was well ensconced as the front-runner. Christie was himself a towering titan of timidity in that campaign. He dropped out after finishing sixth in the New Hampshire primary and immediately led the charge to Trump's backside.
[Peter Wehner: Accountability arrives for Donald Trump]
This time around, DeSantis, viewed by many Trump-weary Republicans as the top contingency candidate, has barely said a critical word about the former president. Trump, in turn, has been pulverizing the Florida man for months, dismissing him as an "average governor."
Meanwhile, Pence has managed only to rebuke Trump at a private dinner of Washington journalists. Virginia Governor Glenn Youngkin, a favorite of many Republican donors and consultants, recently told Politico that he prefers leaders who can "disagree with people without being disagreeable." He then summarized what sets him apart from Trump. "We just have different styles," Youngkin concluded. Ah yes, if only Trump had a more agreeable "style," everything would be cool.
Or maybe Republicans should consider a change in "style." The delicate deference they continue to afford Trump—through two impeachments, repeatedly poor election showings, and (at least) one indictment—seems only to have solidified his hold over them.
Campaigns are supposed to be "disagreeable" sometimes, right? Especially when the face of your party is about to become a mug shot.
The "next-gen remake" is the latest and safest cash cow in video gaming. Take a hit title that came out a decade or more ago on a prior console, spiff it up with updated graphics, controls, and maybe even some new content, and sell it at full price to a nostalgic audience. Since its 2005 debut on the Nintendo GameCube, Capcom's Resident Evil 4 has been lightly reconfigured for a dozen different devices. But the most recent edition is a soup-to-nuts revamp, meant to bring in a new generation while still satisfying longtime players like me who are just looking to relive the glory days.
I was introduced to Resident Evil 4 in college, and I've replayed it countless times over the years as it's been "ported" to new consoles. When it was first released, the game marked a departure from the rest of the Resident Evil series, in which the player navigates the fictional Raccoon City during a viral zombie outbreak. The first Resident Evil pioneered the "survival horror" genre, asking players to conserve ammunition, solve puzzles, and withstand jump scares as enemies swarmed from every dark corner. The best-selling horror franchise spawned rival series such as Silent Hill and Left 4 Dead, but by 2005, the Resident Evil formula had grown creaky, having gained sequels and prequels for almost a decade to diminishing returns.
Resident Evil 4 aimed to loosen things up. Capcom added more dynamic action, simplified the disorienting, maze-like landscape of the earlier games, and changed the setting from Raccoon City to rural Spain. (Yes, you read that right: rural Spain.) The protagonist is Leon Kennedy, a rookie cop in Resident Evil 2 and now a steely government agent assigned to a special mission: rescue the president's daughter, Ashley, who has been kidnapped and locked in a dungeon by mad cultists. And this, dear reader, is why I love Resident Evil 4 the most, and why I happily slapped my money down for this latest remake on the day it was released.
To put it simply: The game is very, very silly. Yes, it's filled with gory violence, awesome weaponry, and a distressing atmosphere, but it's basically about a zealous do-gooder hero rescuing a princess in a castle. Its depiction of contemporary Spain is laughable, rendering it as a community filled with medieval, dull-witted farmers; hooded cult members; and preening villains who seemed to have stumbled out of the Napoleonic Wars. The plot is a Super Mario adventure crossed with straight-to-video thriller nonsense, to immensely satisfying effect. Leon Kennedy brings much of the magic; he looks like he was transplanted from the set of a mid-2000s boy-band video, his blond, hairspray-slicked locks falling perfectly over his forehead as he takes aim at hordes of enemies. Midway through the game, Ashley becomes Leon's chirpy companion, wearing impractical boots and constantly needing assistance.
My deep fear with the Resident Evil 4 remake was that a lot of this cartoonish nonsense would be scrubbed away. The trend, these days, is for video games to be serious, and lavish, "triple A" games that aim for the highest sales are the most serious of all. The Last of Us, recently adapted with fawning faithfulness by HBO, is a prime example of that category, featuring a dense script, motion-capture performances by great actors, and some meta ruminations on gaming's violent themes and the bleakness of the horror genre. The original Resident Evil 4 arrived before any of that forced navel-gazing; I wondered if this latest remake would have to indulge it, in order to justify the high retail price and expansive budget.
[Read: The Last of Us Part II tests the limits of video-game violence]
Thankfully, all of the updates are instead focused on the technical side of things. The game nicely reflects the huge strides the industry has made in physics and artificial intelligence. Villains no longer stagger toward you brainlessly, but dodge and swarm with surprising grace. The game demands accuracy and rewards lateral thinking, encouraging you to not just mow down villains with your weapons but lure them into traps or even pit them against each other. The environment remains hilariously frozen in time, with Leon in his Kevlar vest navigating a world of wooden huts and stone citadels, but it's illustrated beautifully, be it a fortress dimly lit by torches or a farm strewn with zombie corpses.
The dizzy heights of modern gaming do not bother me. I am a huge fan of extravagant works such as The Last of Us, Red Dead Redemption, and The Witcher, and I'm endlessly impressed with how games keep advancing their story lines to match their glitzy tech. But Resident Evil 4 does something that the high-concept blockbusters can't always achieve: It lets me unplug my brain. I can just have fun for an hour. Not everything needs to be elevated; narrative foolishness mixed with pitch-perfect gunplay and heaps of gore will always be an easy way to get me to pick up a controller. I look forward to the next remake, likely to arrive a decade-plus from now, as long as it keeps Leon as airheaded as ever.
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37453-5
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37524-7The negative electrode reversibility limits the lifespan of Zn metal batteries. Here, authors report an aqueous electrolyte with a reverse micelle structure that improves the reversibility of the Zn metal anode enabling the production of an ampere-hour-level pouch cell with five months lifetime.
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37459-zParabacteroides distasonis (P. distasonis), part of the gut microbiome, was reported to play a role in diabetes, colorectal cancer and inflammatory bowel disease. Here the authors report that P. distasonis ameliorates liver
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37389-wBacteria of the genus Streptomyces have complex life cycles involving cellular differentiation and multicellular structures that have never been observed in archaea. Here, the authors show that several halophilic archaea display a life cycle resembling that of Streptomyces bacteria, undergoing cellular differentiation into mycelia and spores.
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32226-yThe feces of sea urchins as food improves survival, growth, and resistance of small sea cucumbers Apostichopus japonicus in summer
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-31826-yComposition and function of the Galapagos penguin gut microbiome vary with age, location, and a putative bacterial pathogen
- Electric Bike and Electric Scooter Deals
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37610-wIt is debated whether the pervasive intergenic transcription from eukaryotic genomes has functional significance. Here, Xu et al. find that only 1–5% of yeast intergenic transcription is unattributable to chance promoter activity or neighboring gene expression.
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-31920-1Dynamical dissipative and radiative flow of comparative an irreversibility analysis of micropolar and hybrid nanofluid over a Joule heating inclined channel
The dilemma for the Republican Party is that Donald Trump's mounting legal troubles may be simultaneously strengthening him as a candidate for the GOP presidential nomination and weakening him as a potential general-election nominee.
In the days leading up to the indictment of the former president, which Manhattan District Attorney Alvin Bragg announced two days ago, a succession of polls showed that Trump has significantly increased his lead over Florida Governor Ron DeSantis, his closest competitor in the race for the Republican nomination.
Yet recent surveys have also signaled that this criminal charge—and other potential indictments from ongoing investigations—could deepen the doubts about Trump among the suburban swing voters who decisively rejected him in the 2020 presidential race, and powered surprisingly strong performances by Democrats in the 2018 and 2022 midterms.
"It is definitely a conundrum that this potentially helps him in the primary yet sinks the party's chances to win the general," says Mike DuHaime, a GOP strategist who advises former New Jersey Governor Chris Christie, a potential candidate for the 2024 Republican nomination. "This better positions [in the primary] our worst candidate for the general election."
That conundrum will only intensify for Republicans because it is highly likely that this is merely the beginning of Trump's legal troubles. As the first indictment against a former president, the New York proceeding has thrust the U.S. into uncharted waters. But the country today is not nearly as far from shore as it may be in just a few months. Trump faces multiple additional potential indictments. Those include possible charges from Fulton County, Georgia, District Attorney Fani Willis, who has been examining his efforts to overturn the 2020 election results in that state, as well as the twin federal probes led by Special Counsel Jack Smith into Trump's mishandling of classified documents and his efforts to block congressional certification of President Joe Biden's victory.
"I think I had a pretty good track record on my predictions and my strong belief is that there will be additional criminal charges coming in other places," says Norm Eisen, a senior fellow at the Brookings Institution. "I think you are going to see them in Georgia and possibly [at the] federal" level.
The potential for such further criminal proceedings is why many political observers are cautious about drawing too many firm conclusions from polling around public reaction to this first indictment, which centers on Trump's payment of hush money to the porn star Stormy Daniels late in the 2016 campaign.
[Read: The first electoral test of Trump's indictment]
The multiple legal nets tightening around Trump create the possibility that he could be going through one or even multiple trials by the time of next year's general election, and conceivably even when the GOP primaries begin in the winter of 2024. In other words, Trump might bounce back and forth between campaign rallies in Iowa or New Hampshire and court appearances in New York City, Atlanta, or Washington, D.C. And such jarring images could change the public perceptions that polls are recording now.
"You are just looking at a snapshot of how people feel today," Dave Wilson, a conservative strategist, told me.
Yet even these initial reactions show how Trump's legal troubles may place his party in a vise.
Polls consistently show that Trump, over the past several weeks, has widened his lead over DeSantis and the rest of the potential 2024 field. That may be partly because Trump has intensified his attacks on DeSantis, and because the Florida governor has at times seemed unsteady in his debut on the national stage.
But most Republicans think Trump is also benefiting from an impulse among GOP voters to lock arms around him as the Manhattan investigation has proceeded. In an NPR/PBS NewsHour/Marist College poll released this week, four-fifths of Republicans described the various investigations targeting Trump as a "witch hunt," echoing his own denunciation of them. "There's going to be some level of emotional response to someone being quote-unquote attacked," Wilson said. "That's going to get some sympathy points that will probably bolster poll numbers."
Republican leaders, as so many times before, have tightened their own straitjacket by defending Trump on these allegations so unreservedly. House GOP leaders have launched unprecedented attempts to impede Bragg's investigation by demanding documents and testimony, and even Trump's potential 2024 rivals have condemned the indictment as a politically motivated hit job; DeSantis may have had the most extreme reaction by not only calling the indictment "un-American" but even insisting he would not cooperate with extraditing Trump from Florida if it came to that (a pledge that is moot because Trump has indicated he plans to turn himself in on Tuesday).
As during the procession of outrages and controversies during Trump's presidency, most Republicans skeptical of him have been unwilling to do anything more than remain silent. (Former Arkansas Governor Asa Hutchinson, a long-shot potential 2024 candidate, has been the most conspicuous exception, issuing a statement that urged Americans "to wait on the facts" before judging the case.) The refusal of party leaders to confront Trump is becoming a self-fulfilling prophecy: Because GOP voters hear no other arguments from voices they trust, they fall in line behind the assertion from Trump and the leading conservative media sources that the probes are groundless persecution. Republican elected officials then cite that dominant opinion as the justification for remaining silent.
But while the investigations may be bolstering Trump's position inside the GOP in the near term, they also appear to be highlighting all the aspects of his political identity that have alienated so many swing voters, especially those with college degrees. In that same NPR/PBS NewsHour/Marist survey, 56 percent of Americans rejected Trump's "witch hunt" characterization and described the investigations as "fair"; 60 percent of college-educated white adults, the key constituency that abandoned the GOP in the Trump years, said the probes were fair. So did a slight majority of independent voters.
In new national results released yesterday morning, the Navigator project, a Democratic polling initiative, similarly found that 57 percent of Americans, including 51 percent of independents, agreed that Trump should be indicted when they read a description of the hush-money allegations against him.
[Read: What Donald Trump's indictment reveals]
The Manhattan indictment "may keep his people with him, it may fire them up, but he's starting from well under 50 percent of the vote," Mike DuHaime told me. "Somebody like that must figure out how to get new voters. And he is not gaining new voters with a controversial new indictment, whether he beats it or not." Swing voters following the case in New York, DuHaime continued, "may not like it, they may think Democrats have gone too far, and that might be fair." But it's wishful thinking, he argues, to believe that voters previously resistant to Trump will conclude they need to give him another look because he's facing criminal charges for paying off a porn star, even if they view the charges themselves as questionable.
The NPR/PBS Newshour/Marist survey underlines DuHaime's point about the limits of Trump's existing support: In that survey, a 61 percent majority of Americans—including 64 percent of independents and 70 percent of college-educated white adults—said they did not want him to be president again. That result was similar to the latest Quinnipiac University national poll, which found that 60 percent of Americans do not consider themselves supporters of Trump's "Make America Great Again" movement. The challenge for the GOP is that about four-fifths of Republicans said they did consider themselves part of that movement, and about three-fourths said they wanted him back in the White House.
The open question for Trump is whether this level of support, even in the GOP, may be his high-water mark as the investigations proceed. Eisner and John Dean, the former White House counsel for Richard Nixon, both told me they believe that the New York case may be more threatening to Trump than many legal analysts have suggested. "I think that the New York case is much stronger than people perceive it to be," Dean told me yesterday. "We really don't know the contents of the indictment, and we really won't know for a much longer time the evidence behind the indictment."
Whatever happens in New York, Trump still faces the prospect of indictments on the more consequential charges looming over him in Georgia and from the federal special prosecutor. Dean said that Bragg's indictment, rather than discouraging other prosecutors to act, "may have the opposite effect" of emboldening them. Trump "has escaped accountability literally his entire life, and it finally appears to be catching up with him," Dean said. Academic research, he added, has suggested that defendants juggling multiple trials, either simultaneously or sequentially, find it "much harder to mount effective defenses."
Bryan Bennett, the senior director of polling and analytics at the Hub Project, the Democratic polling consortium that conducts the Navigator surveys, says the potential for multiple indictments presents Trump with a parallel political risk: The number of voters who believe he has committed at least one crime is very likely to rise if the criminal charges against him accumulate. "It's hard to imagine any scenario where multiple indictments is useful" to him, Bennett told me.
DuHaime and Wilson both believe that multiple indictments eventually could weigh down Trump even in the GOP primary. "The cumulative effect takes away some of the argument that it's just political," DuHaime said. Each additional indictment, he continued, "may add credibility" for the public to those that came before.
Wilson believes that repeated indictments could reinforce the sense among Republican voters that Trump is being treated unfairly, and deepen their desire to turn the page from him. He likened the effect to someone living along a "Hurricane Alley," who experiences not one destructive storm in a season but several. "The weight of a single hurricane blowing through is one thing," Wilson told me. "But if you have several hurricanes of issues blowing through, you will get conservatives [saying], 'I don't know if I want to continue living in Hurricane Alley' with Trump, and they are going to look at other candidates."
Given Trump's hold on a big portion of the GOP coalition, no one should discount his capacity to win the party nomination next year, no matter how many criminal cases ensnare him. And given the persistent public dissatisfaction with the economy and lackluster job-approval ratings for Biden, no one dismisses the capacity of whoever captures the Republican nomination to win the general election.
The best-case scenario sketched by Trump supporters is that a succession of indictments will allow him to inspire even higher turnout among the predominantly non-college-educated and nonurban white voters who accept his argument that "liberal elites" and the "deep state" are targeting him to silence them. But even the heroic levels of turnout Trump inspired from those voters in 2020 weren't enough to win. For the GOP to bet that Trump could overcome swing-voter revulsion over his legal troubles and win a general election by mobilizing even more of his base voters, Bennett said, "seems to me the highest risk proposition that I can imagine."
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32440-8DeepNAPSI multi-reader nail
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32456-0
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32569-6Molecular pathways identified from single nucleotide polymorphisms demonstrate mechanistic differences in
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-31774-7Longitudinal hierarchical Bayesian models of covariate effects on airway and alveolar nitric oxide
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32630-4Quantifying mutant
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-27964-yWhole microbe arrays accurately predict interactions and overall antimicrobial activity of
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32550-3Validity and reliability of the
Scientific Reports, Published online: 01 April 2023; doi:10.1038/s41598-023-32291-3Comparative transcriptomics of aphid species that diverged > 22 MYA reveals genes that are important for the maintenance of their symbiosis
|submitted by /u/canman44999
- Andy Beshear's veto and passed a sweeping law limiting the rights of transgender youth across the state, both at school and in their personal lives.
This week, GOP lawmakers overruled Kentucky Gov. Andy Beshear's veto and passed a sweeping law limiting the rights of transgender youth across the state, both at school and in their personal lives.
(Image credit: Ryan C. Hermens/Lexington Herald-Leader/TNS via Getty Images)
Sales of super-efficient electric heat pumps are rising in the US. But what are heat pumps? And why do some call them a key climate solution?
(Image credit: Julia Simon/NPR)
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37553-2The East Antarctic Ice Sheet is currently surrounded by relatively cool water but changes in ocean dynamics may lead to warmer ocean water on the shelf in the future. This has the potential to dramatically increase its future sea level contribution.
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37394-zSpatial visualization of metabolites in tissues via mass spectrometry imaging can be prone to user perception bias. Here, the authors report the computational framework moleculaR that introduces probabilistic data-dependent molecular mapping of nonrandom spatial patterns of metabolite signals.
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-36970-7Matter behaves differently at the nanoscale. Here, the author introduces the concept of a disjoining chemical potential for nanoscale thermodynamics, showing that thermodynamic functions depend on the environment, and suggests possible experimental verifications.
Nature Communications, Published online: 01 April 2023; doi:10.1038/s41467-023-37529-2An interfacial assembly strategy was developed to construct single-atom binary Fe/Co-Nx sites with a high accessible site density of 7.6 × 1019 sites per gram which results in increased power densities in fuel cells and Zn/air batteries.
Wondering what you all think. Those with experience in the field or who have done research on findings. Want to learn so many thing and pursue different careers.
What are your expectations for the future? Will AI get us there sooner?
|submitted by /u/chilispicedmango
The "Mother of Dark Matter" was a force of nature—and a forceful advocate for other women who also wanted to dedicate their careers to the cosmos.
So after looking at how bad bard is compared to ChatGPT…
(especially compared to GPT-4):
I'm wondering why does
seem so far behind?
They were the clear leader in the space for a LONG time, they had several huge research papers that allowed this tech to be developed, they have insane funding, I think their revenue is like $270 BILLION from people clicking on ads…
able to come out with things that seem decades ahead?
In the video above both AIs get asked questions from the "Sparks of AGI" research paper by Microsoft.
The questions are really unique so that the data that the AI is trained on would never had those scenarios in them. For example one is to write a letter recommending a presidential candidate, but the letter is written by Gandhi and the candidate is Electron, a subatomic particle.
Google writes a boilerplate letter recommending a presidential candidate, but doesn't seem to grasp the significance of the prompt, it's letter can be applied to any person recommending any other person.
GPT-4 writes out a mind-blowing letter that really seems to grasp and capture the belief of Gandhi while also some really create writing about why an electron would make a great candidate.
There are TONS more examples like this, but Bard mostly falls flat. The thing is Bard would be revolutionary IF chatgpt did not exist.
But in a world with GPT-4 it seems like Bard is a toy compared to OpenAI's stuff.
DISCLAIMER: I own Google stock and I'm getting worried… are they losing their AI edge or is this more of a "sleeping giant" that's about to wake up and create the next big leap in AI?
Reminds me how Thomas Edison got started, his first invention that made big money was the stock ticker tape. Information is key for any investment decision, now and then.
An AI that gets enough data input on wages/compensation could recommend any worker a different job or carrer according to the individual talent and situation.
Match the individual person, his/her education, work history, character against all opportunities out there.
I think the one disadvantage workers suffer from is lack of information what is out there. Stock brokers paid huge dollars for the stock ticker tape for thry got informed in real time about all their options.
All workers should have something similar, an AI app that gets them the best job possible for the highest pay.
Laypeople respond to the theoretical AI extinction threat with reactions ranging from willful ignorance to eye-rolling at Crazy Uncle Eliezer the Conspiracy Theorist.
I personally believe that AI does pose a very significant extinction risk, but I also have a hard time determining the degree of such risk within my own 90 percent confidence interval.
If you listen to AI alignment professionals, the extinction risk is currently close to 100% because alignment initiatives are in their infancies while AI technology is already undergoing a swift puberty.
However, other than the "paperclip factory hypothesis," I have heard precious few actual descriptions of the mechanisms that might cause our extinction.
What are some other plausible scenarios, based on tangible, real-world mechanisms that laypeople could understand, that describe mechanisms by which AI might cause the extinction of our species, and WHY?
A rare species of beetle has been named after former California Gov. Jerry Brown after scientists found one on his ranch.
This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.
Let's begin by assuming you're not planning to watch WrestleMania this weekend. World Wrestling Entertainment (WWE), with its ridiculous bombast and barbaric violence, has turned people off for decades. Yet its popularity—not to mention its profound influence on American culture and politics—persists. Below, I explain why.
But first, here are three new stories from The Atlantic.
- The election you shouldn't look away from
- Something odd is happening with handbags.
- Childbirth is no fun. But an extremely fast birth can be even worse.
And you can find all our coverage of Donald Trump's indictment here.
The Man in the Arena
WWE can be eerily prophetic. Had you watched WrestleMania 23, back in 2007, for instance, you would have seen a future president of the United States, Donald Trump, standing in the ring with a devilish smile, preparing to humiliate the WWE head honcho, Vince McMahon.
Although scores of articles have been written about the connections between wrestling and Trumpism, comparatively little is understood about McMahon—who, in reality, is one of Trump's close friends. (During the 2016 campaign, McMahon was reportedly on the extremely small list of individuals whose phone calls Trump would take in private; his wife, Linda, went on to serve in Trump's Cabinet.) A new biography of McMahon, Ringmaster, came out earlier this week, and I spent some time with its author, Abraham Josephine Riesman, trying to unpack the book's principal argument: that McMahon and WWE led to "the unmaking of America." McMahon reigned over the thorny world of professional wrestling until last summer, when he stepped down from his position as CEO and chairman following an alleged sex scandal and related hush-money payments. (Sound familiar?) He returned as chairman at the beginning of this year, after the WWE's investigation into the allegations concluded.
What McMahon understood better than anyone was that the physical act of wrestling was just one element of what the audience wanted. Millions of people flock to WWE for the monthslong story lines, the operatic entrances, the cheeky backstage drama. Wrestlers seize the mic and deliver fired-up speeches filled with taunts, zingers, and thrilling call-and-response sections. Trump grew up a wrestling fan and mastered these arena-style linguistics. His rallies, his debates, his interviews, his social-media posts—no matter the venue, Trump relied on WWE tactics. When he launched his first presidential campaign back in 2015, this approach was shocking to some. And even more shocking when it worked.
The 45th president is not scheduled to make a cameo at this weekend's WrestleMania. At the moment, he's preparing to turn himself in to the authorities in New York City on Tuesday following yesterday's grand-jury indictment. One of Trump's congressional acolytes, Representative Marjorie Taylor Greene of Georgia, announced that she, too, will be in New York on Tuesday: "We MUST protest the unconstitutional WITCH HUNT!" she tweeted today. Greene has also used WWE tools to propel herself to elected office. Earlier this year, during President Joe Biden's State of the Union address, Greene heckled him, not unlike a WWE fan screaming from the sidelines.
I've watched a lot of old wrestling clips in recent weeks. Specifically, I went down a rabbit hole of interviews with the wrestler Ric Flair. Flair routinely boasted of his alligator shoes, his Rolex watch, his libido. His absurd brag—"I've got a limousine sittin' out there a mile long!"—may or may not make you think of Trump and/or his first press secretary, Sean Spicer, whose use of hyperbole was, shall we say, unrestrained.
I texted some of these outlandish Ric Flair videos to friends. In response, a buddy pointed me to an October 29, 1985, speech from Flair's former wrestling nemesis, Dusty Rhodes, a.k.a. "The American Dream." The grainy YouTube clip of Rhodes's monologue has more than 2.6 million views. It's three and a half minutes long, and worth watching in its entirety.
Whereas Flair's oratory is all "me," Rhodes takes the approach of "we." Rhodes ticks off examples of challenges that everyday Americans face, something that the stylin', profilin' Flair could never understand. His speech has a decidedly Grapes of Wrath feel to it. "Hard times are when the autoworkers are out of work and they tell 'em, 'Go home!'" Rhodes shouts. "And hard times are when a man is workin' a job 30 years—30 years!—they give him a watch, kick him in the butt, and say, 'Hey, a computer took your place, daddy!' That's hard times!" Trump, for all of his abhorrent narcissism, shrewdly uses the "we"—specifically, the us-versus-them—approach in nearly all of his campaign speeches to similar effect. When headlining this month's CPAC conference, he sounded not only like a vengeful pro wrestler, but like someone seething with menace: "I am your retribution."
This year's WrestleMania title match will be between the current champion, the hulking Roman Reigns, and Rhodes's 37-year-old son, Cody. The younger Rhodes is a cocky blonde who leans heavily into American-flag iconography, wears a business suit and power tie, and goes by "The American Nightmare." (Again: Sound familiar?)
WrestleMania used to be available on pay-per-view, but now it's a two-night event streaming on Peacock on April 1 and 2. I am not the die-hard wrestling fan I was back in middle school, but I'll likely dip in and out of the broadcast to catch a few of the monologues, if not the matches. I don't want to go so far as to predict that a future president will enter the ring, as was the case in 2007. But I wouldn't rule that possibility out.
- After a grand jury voted to indict Donald Trump yesterday, he will likely be arraigned on Tuesday. One of his lawyers said that the former president is prepared to go to trial.
- The Minneapolis City Council approved an agreement with the state of Minnesota to revamp its policing system, nearly three years after George Floyd's murder.
- A "high risk" storm alert—a rare weather designation reserved for severe events—was issued for parts of the American Midwest and mid-South.
- The Books Briefing: Maya Chung explores what California means to writers.
- Work in Progress: Derek Thompson unravels why Americans care about work so much.
Explore all of our newsletters here.
The Influencer Industry Is Having an Existential Crisis
By Kaitlyn Tiffany
Close to 5 million people follow Influencers in the Wild. The popular Instagram account makes fun of the work that goes into having a certain other kind of popular Instagram account: A typical post catches a woman (and usually, her butt) posing for photos in public, often surrounded by people but usually operating in total ignorance or disregard of them. In the comments, viewers—aghast at the goofiness and self-obsession on display—like to say that it's time for a proverbial asteroid to come and deliver the Earth to its proverbial fiery end.
Influencers in the Wild has been turned into a board game with the tagline "Go places. Gain followers. Get famous. (no talent required)" And you get it because social-media influencers have always been, to some degree, a cultural joke. They get paid to post photos of themselves and to share their lives, which is something most of us do for free. It's not real work.
But it is, actually. Influencers and other content creators are vital assets for social-media companies such as Instagram, which has courted them with juicy cuts of ad revenue in a bid to stay relevant, and TikTok, which flew some of its most famous creators out to D.C. last week to lobby for its very existence.
More From The Atlantic
- The woolly-mammoth meatball is an all-time great food stunt.
- The rise of AI Taylor Swift
- Photos of the week: a tattoo convention in Ecuador, a sunrise over Rio de Janeiro, and more
Read. "The Vendor of New Hearts," a poem by Colin Channer.
"Once way far in time in a village coiled from stone / I met an elder in a teahouse. He proposed, and I said yes / I'll join you, and we walked together to the vendor of new hearts."
Watch. Swarm, Donald Glover's horror-comedy (on Amazon Prime), has a twisted take on celebrity culture.
Tomorrow, April 1, marks the 20th anniversary of the White Stripes' Elephant, one of the defining rock albums of the new millennium. You surely know the inescapable earworm "Seven Nation Army," but I think the peak of the record is track eight, "Ball and Biscuit," a swaggering garage-blues romp.
P.P.S. An impeccable list of records also turn 20 this year: Radiohead's Hail to the Thief, Jay Z's The Black Album, Death Cab for Cutie's Transatlanticism, OutKast's Speakerboxxx/The Love Below, My Morning Jacket's It Still Moves, Songs: Ohia's The Magnolia Electric Co., and the Strokes' Room on Fire, to name just a few. As you settle into this Friday night, pour yourself a drink and crank the Yeah Yeah Yeahs' Fever to Tell, yet another 2003 banger. Here's a great clip of Karen O and the band crushing "Y Control" on Late Night With Conan O'Brien.
Isabel Fattal contributed to this newsletter.
Three-year-old was on beach in Costa Rica when reptile tried to steal her dessert and bit her hand
It could almost be a fable from Aesop, or a story from the Brothers Grimm: the toddler, the lizard and the cake. But for one small child, whose baked treat was snatched by an iguana, it was a tale with a twist.
Doctors have revealed that the toddler ended up with an unusual
after being bitten by the reptile as it tried to steal a bite of cake.Continue reading…
AI Taylor Swift is mad. She is calling up Kim Kardashian to complain about her "lame excuse of a husband," Kanye West. (Kardashian and West are, in reality, divorced.) She is threatening to skip Europe on her Eras Tour if her fans don't stop asking her about international dates. She is insulting people who can't afford tickets to her concerts and using an unusual amount of profanity. She's being kind of rude.
But she can also be very sweet. She gives a vanilla pep talk: "If you are having a bad day, just know that you are loved. Don't give up!" And she just loves the outfit you're wearing to her concert.
She is also a fan creation. Based on tutorials posted to TikTok, many Swifities are using a program to create hyper-realistic sound bites using Swift's voice and then circulating them on social media. The tool, the beta of which was launched in late January by ElevenLabs, offers "Instant Voice Cloning." In effect, it allows you to upload an audio sample of a person's voice and make it say whatever you want. It's not perfect, but it's pretty good. The audio has some tonal hitches here and there, but it tends to sound pretty natural—close enough to fool you if you aren't paying enough attention. Dark corners of the internet immediately used it to make celebrities say abusive or racist things; ElevenLabs said in response that it "can trace back any generated audio to the user" and would consider adding more guardrails—such as manually verifying every submission.
Whether it's done this is unclear. After I forked over $1 to try the technology myself—a discounted rate for the first month—my upload was approved nearly instantly. The slowest part of the process was finding a clear one-minute audio clip of Swift to use as a source for my custom AI voice. Once that was approved, I was able to use it to create fake audio right away. The entire process took less than five minutes. ElevenLabs declined to comment about its policies or the ability to use its technology to fake Taylor Swift's voice, but it provided a link to its guidelines about voice cloning. The company told The New York Times earlier this month that it wants to create a "universal detection system" in collaboration with other AI developers.
The arrival of AI Taylor Swift feels like a teaser for what's to come in a strange new era defined by synthetic media, when the boundaries between real and fake might blur into meaninglessness. For years, experts have warned that AI would lead us to a future of infinite misinformation. Now that world is here. But in spite of apocalyptic expectations, the Swift fandom is doing just fine (for now). AI Taylor shows us how human culture can evolve alongside more and more complex technology. Swifties, for the most part, don't seem to be using the tool maliciously: They're using it for play and to make jokes among themselves. Giving fans this tool is "like giving them a new kind of pencil or a paintbrush," explains Andrea Acosta, a Ph.D. candidate at UCLA who studies K-pop and its fandom. They are exploring creative uses of the technology, and when someone seems to go too far, others in the community aren't afraid to say so.
[Read: Welcome to the big blur]
In some ways, fans might be uniquely well prepared for the fabricated future: They have been having conversations about the ethics of using real people in fan fiction for years. And although every fandom is different, researchers say these communities tend to have their own norms and be somewhat self-regulating. They can be some of the internet's most diligent investigators. K-pop fans, Acosta told me, are so good at parsing what's real and what's fake that sometimes they manage to stop misinformation about their favorite artist from circulating. BTS fans, for example, have been known to call out factual inaccuracies in published articles on Twitter.
The possibilities for fans hint at a lighter side of audio and video produced by generative AI. "There [are] a lot of fears—and a lot of them are very justified—about deepfakes and the way that AI is going to kind of play with our perceptions of what reality is," Paul Booth, a professor at DePaul University who has studied fandoms and technology for two decades, told me. "These fans are kind of illustrating different elements of that, which is the playfulness of technology and the way that it can always be used in the kind of fun and maybe more engaging ways."
But AI Taylor Swift's viral spread on TikTok adds a wrinkle to these dynamics. It's one thing to debate the ethics of so-called real-person fiction among fans in a siloed corner of the internet, but on such a large and algorithmically engineered platform, the content can instantly reach a huge audience. The Swifties playing with this technology share a knowledge base, but other viewers may not. "They know what she has said and what she hasn't said, right? They're almost immediately able to clock, Okay, this is an AI; she never said that," Lesley Willard, the program director for the Center for Entertainment and Media Industries at the University of Texas at Austin, told me. "It's when they leave that space that it becomes more concerning."
Swifties on TikTok are already establishing norms regarding the voice AI, based at least in part on how Swift herself might feel about it. "If a bunch of people start saying, 'Maybe this isn't a good idea. It could be negatively affecting her,'" one 17-year-old TikTok Swiftie named Riley told me, "most people really just take that to heart." Maggie Rossman, a professor at Bellarmine University who studies the Swift fandom, thinks that if Taylor were to come out against specific sound bites or certain uses of the AI voice, then "we'd see it shut down amongst a good part of the fandom."
But this is challenging territory for artists. They don't necessarily want to squash their fans' creativity and the sense of community it builds—fan culture is good for business. In the new world, they'll have to navigate the tension between allowing some remixing while maintaining ownership of their voice and reputation.
A representative for Swift did not respond to a request for comment on how she and her team are thinking about this technology, but fans are convinced that she's listening. After her official TikTok account "liked" one video using the AI voice, a commenter exclaimed, "SHES HEARD THE AUDIO," following up with three crying emoji.
TikTok, for its part, just released new community guidelines for synthetic media. "We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock," the guidelines say. "However, AI can make it more difficult to distinguish between fact and fiction, carrying both societal and individual risks." The platform does not allow AI re-creations of private people, but gives "more latitude" for public figures—so long as the media is identified as being AI-generated and adheres to the company's other content policies, including those about misinformation.
But boundary-pushing Swift fans can probably cause only so much harm. They might destroy Ticketmaster, sure, but they're unlikely to bring about AI armageddon. Booth thinks about all of this in terms of "degrees of worry."
"My worry for fandom is, like, Oh, people are going to be confused and upset, and it may cause stress," he said. "My worry with [an AI fabrication of President Joe] Biden is, like, It might cause a nuclear apocalypse."
Nature, Published online: 31 March 2023; doi:10.1038/d41586-023-00900-wAdvocates ask the US biomedical agency to rethink the design of its RECOVER initiative, citing possible harm and funding waste.
I hear a lot of FUD about AI replacing humans and putting them out of work, or people using it to create false narratives with deep fakes and whatnot.
One use case that I think would be incredible is concerning social services. You've heard a ton of horror stories about foster kids being put into homes where they are abused, mistreated, molested, starved, and in some case, even end up dead. Many foster parents abuse the system by taking on children and misappropriate the money they receive that should be going to the children's needs. The worst part, this goes unchecked because there aren't enough social workers to check in and make house visits. Which I get, its a shit job that pays little to nothing.
Imagine if ChatGPT could be use to conduct virtual house checks. Maybe monthly or even quarterly, foster parents and the children are required to join a virtual meeting with a camera. I'm sure ChatGPT could even be trained to pick up on facial and voice queues to identify lies. In the instances where a red flag is raised, an in person visit could be scheduled.
Might be dumb, but I certainly believe there is a need and that technology could hold the answer.
I want to begin by stating that what follows is an opinion/speculation. Also, English is not my first language, so I apologize for any spelling mistakes or ambiguity of reason.
- Jobs and the economy.
Current AI systems are not as potentially dangerous as AGI. Nevertheless, they have a tremendous capacity to create a dystopia, not by direct action as AGI, but by the side effects that they produce. Most who predict that LlMS and other current AI systems could increase GDP by an X amount (7%/year as predicted in a CNN article) do so assuming that all the unemployed will be able to find a new job (factory worker example tells us they do not) with similar pay, or that new jobs will be created and that disruption to the system will be minimal or kept in check.
The problem is, I think, that this time, unlike in the past, there will be very few new jobs created simply because AI will be able to do them too. The majority of office jobs, and some other white collar jobs, will be gone and will not reinvent, why would you create a job and pay a human when AI could do that too? Also on this point, in case new jobs do appear, will they do so at the same pace that old ones disappear? Are they gonna be in sufficient numbers to accommodate all the unemployed? Will the wage for such a job be good or is it going to be low since there will be a lot of potential employees? Can we retrain all these people, can they be retrained? Are we gonna hear again the idiotic statement told to truck drivers, "just learn to Code, bro" (I think we already do with, "just become an AI engineer, bro")? Are the people that we retrain gonna have with what to live, in the time that they are retrained (with some of them doing so multiple times in a relatively short time as AI will come for more and more jobs)? As some of these new unemployed will not be able to pay their mortgages or other forms of credit, will some banks be able to survive another crisis? Is economic reflexivity true and expectation of another crisis caused by unemployment cause even more crisis?
Another point to talk about is the effect this mass unemployment will have on the sectors of the economy not affected directly by automation. For starters, all these sectors will see a decrease in their income as there will simply be fewer consumers (even health care as poor people don't use health services unless absolutely necessary) which will lead to a decrease in the wages for the employed and a reduced need for new hires for the company (possibly even layoffs). But the still employed have another problem to deal with as all these sectors will see a flood of potential new employees and so decreasing wages even further, why keep someone for 1500 $/month when someone accepts 1000? All of this will reinforce the crisis as the still employed will earn less and therefore consume less and businesses will make less and thus pay less and hire less.
It's true that there is no fixed amount of jobs in an economy but it's absurd to assume that there is an infinite amount of them or that AI as we have it now cannot do a vast majority of them.
- The effects on critical thinking and intelligence in humans.
As these tools make themselves more present in the everyday life of people we will see a rise in a sort of intellectual atrophy. AI would do the thinking for most people, AI will take the decision for most people, AI will direct a vast portion of the population into a sort of conveyor belt humans, same ideas, same words, same goals, and so on.
While it's true that LlMs can improve education, we should look behind the "better teachers". With these new tools, there is no point in learning information beyond what is absolutely necessary. Everything will be offered to the young mind on a plate, information comes without any sort of experience, even the minimal social experience of learning, and so learning becomes an action without an actual need to keep the new information, basically learn something now, pass the test but forget it 2 days after it. Information and learning will be relegated to a hobby, and as we saw and see with a lot of people, learning beyond the bare minimum is not that appealing (I am sorry if this sounds elitist). These systems do not address the actual problems of our education systems, it just changes human teachers with AI ones. Even the errors in the information that human teachers provide sometimes are kept, we can think of it as replacing a system with 90% accuracy with a new system that provides a 90% accuracy, great success. We have had great access to information for at least 20 years but we don't seem to be smarter than we were in the 90s or 80's so even more access to information (fake or otherwise) is unlikely to change something.
- The effects on democracy.
But the nature of information will not always be accurate, fake news will see an alarming increase, both in written and video form. Imagine the tools we have today in 2016, Trump vs Hilary, both groups would make fake news about the opposing party, and most of their sympathizers will believe them not because they believe the validity of this news or the reasons presented in them, but because they want to believe them, they want to believe that the other is the devil incarnate. Add to this the radicalization of a great % of the population as we saw since 2016 in the US and EU, the us vs them mentality that arouse from it and you have the perfect storm, a brave new world where truth is somewhere buried in a pile of lies and very few are searching for it.
Secondly, all the unemployed will probably not be helped by the state (UBI is great but unlikely to be implemented as the 1% will not allow it and democracy has been eroded by them for quite some time now) and will become resentful, without hope for the future, depressed, and radicalized. All these feelings and the atrophy of the intellect will make these people the types of people that are likely to vote or support extremists, and initially, fake saviours (like Trump and others) and when that fails and these saviours do the opposite of what the people want, and these saviours prove to be charlatans, the resentful will turn to the true dictators and extremist as we saw in 1932 in Germany.
That is it for us in the current way we live. Every generation thought of themselves as the last generation but I think it might be true in our case. If we reach a point similar to 1932 Germany I think nuclear war is 100% likely, and if that does not happen we should keep in mind that resources will still be consumed and eventually run dry and that the climate is getting worse and is unlikely to be solved by the current system or the possible dystopian one that comes after it (keep in mind we don't have AGI yet to fix all the problems for us).
Currently the solution, I think, is to ban these tools from most of the workforce (no ban in research, science and military, military because no country would ban it and risk other going ahead) and introduce them a few at a time in small steps so as to allow the job market and the world to adapt to the new realities. Also, I think we should start implementing at least something like a negative income tax (as proposed by Milton Friedman) to cover for the increase in unemployment and the effects will have on the economy and democracy. I agree that the current system is not totally worth saving, but I think we should find ways to replace it with something better and not with "yea but we created a lot of value for our shareholders".
What do you guys think? How does the future looks like? Also, sorry for the long post, here's a potato.
I'm thinking about this topic and there is a good solution:
In recent years, the development of AI security cameras has garnered significant attention, primarily due to concerns surrounding privacy and bias. The new AI Security Camera system addresses these issues by offering an ethical and unbiased solution that ensures public safety while respecting individual privacy.
The AI Security Camera system employs advanced algorithms and movement recognition technology to monitor public spaces without explicitly identifying individuals. When the system detects a crime in progress, it assigns a temporary ID to the suspected criminal based on a combination of non-personal characteristics. These temporary identifiers include facial features, gait analysis, clothing patterns, body posture, and other unique, observable traits that do not intrude on personal data or privacy.
As the suspect moves through the camera network's field of view, the AI system continuously updates the individual's location, enabling law enforcement to respond quickly and efficiently. Importantly, this system does not access any personal information, ensuring that privacy is maintained throughout the process.
To minimize the potential for bias, the AI algorithms used in the system are designed and trained to focus on objective and observable traits. This approach eliminates the risk of profiling based on race, gender, or other sensitive attributes. Additionally, ongoing audits and transparent reporting measures are in place to ensure that the AI system remains unbiased and effective in identifying criminal activity.
Once law enforcement apprehends the suspect, the AI Security Camera system provides a comprehensive record of the individual's movements since the crime occurred. This data serves as valuable evidence, potentially aiding in the prosecution and resolution of the case.
In conclusion, the AI Security Camera system represents a significant advancement in public safety technology. By focusing on non-personal identifiers and maintaining a commitment to ethical and unbiased practices, this system offers an innovative and privacy-conscious solution to crime prevention and detection.
|submitted by /u/GodAndGaming123
No Free Clout
The swagged out Pope was just too much to handle. Not long after AI-generated images of Pope Francis in immaculate white drip went viral,
— which was used to create the images — has announced that it's pulling the plug on free trials, as spotted by The Washington Post. Temporarily, anyway.
"Due to a combination of extraordinary demand and trial abuse," CEO and founder David Holz wrote in the official Midjourney Discord, "we are temporarily disabling free trials until we have our next improvements to the system deployed."
According to Holz, Midjourney staff experimented with adding new safety measures to combat abuse, but "they didn't seem to be sufficient," he wrote.
Its paid service, which costs as little as $10 per month, remains untouched.
When the pictures of the Pope clad in Balenciaga garb made rounds over the past week — not to mention comical depictions of former president Donald Trump getting arrested — many expressed equal parts alarm and bemusement at how convincing they appeared. These images were, to many, the viral harbingers of a new era of effortlessly generated misinformation.
And you can probably thank Midjourney's latest V5 update from earlier this month for their prowess. Midjourney is now capable of being a little too convincing to the uncritical eye, finally (almost) nailing human hands. Overall, its output simply appears far more coherent and lifelike than before.
Money's the Game
But Holz claims the discontinuation of free trials actually has nothing to do with the viral disinformation ticking time bombs. The main motivator, according to him, was profit.
"We stopped trials because of massive amounts of people making throwaway accounts to get free images," Holz told Gizmodo.
"This happened at the same time as a temporary [graphics card] shortage," he added. "The two things came together and it was bringing down the service for paid users."
Holz says that Midjourney is trying to iron out its moderation, but so far, little has been done beyond banning the word "arrested" in its prompts, according to The Verge.
"I think we're still trying to figure out what the right moderation policies are," Holz told Gizmodo. "We already have some new systems coming which should ship soon."
Midjourney V5 has barely been out for two weeks, so if the recent crop of convincing images of celebrities are anything to go by, those "new systems" can't come soon enough.
More on AI: GPT-4 Was Deeply Racist Before OpenAI Muzzled It
The post Midjourney Nixes Free Generations After AI Pope Images Go Viral appeared first on Futurism.
- Some states have passed laws that will continue to require coverage of ACA-mandated preventive services even if the federal protections are eliminated.
A federal judge's ruling that invalidates part of the Affordable Care Act could mean people will have to pay for certain types of preventive care, though likely not immediately
A linguistic puzzle, ancient DNA, the origins of bipedalism, and much more in this month's Quick Hits