Showing results for “Generative AI”
Generative AI

Generative AI is a form of artificial intelligence that can mimic human imagination and creativity. The term "generative" refers to these models' ability to generate original content, including text, video, audio, and more.
Text-based generation relies on large language models trained on massive datasets of public text. These models identify patterns in sequences of words and create responses one word at a time, resembling an advanced auto-complete tool. Tools that generate images and videos have traditionally relied on two competing systems: a generator, which produces data that resembles the training data, and a discriminator, which tries to distinguish generated data from training data. The two compete until the discriminator can no longer tell the two sets of data apart.
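That auto-complete behavior can be caricatured in a few lines. The sketch below is purely hypothetical (a made-up corpus and raw word counts, not how any production model is implemented): it records which word follows which in a tiny text, then generates a sentence one word at a time. Real LLMs do the same thing over subword tokens with billions of learned parameters instead of counts.

```python
import random

# Tiny corpus standing in for the "massive datasets of public text".
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Record which words follow which (a crude stand-in for a trained model).
following = {}
for word, nxt in zip(corpus, corpus[1:]):
    following.setdefault(word, []).append(nxt)

def generate(start, length=6, seed=0):
    """Produce text one word at a time, like an advanced auto-complete."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:          # no known continuation: stop early
            break
        out.append(random.choice(options))  # pick a plausible next word
    return " ".join(out)

print(generate("the"))
```

Each next word is drawn only from continuations actually seen in the training text, which is one intuition for why such models sound fluent on familiar patterns and falter in unfamiliar contexts.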
As of October 2025, approximately 800 million people use ChatGPT weekly, and new media-generating tools are being introduced daily, increasing the risk of bad actors using them to create disinformation and propaganda.
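The generator-versus-discriminator contest described above can also be sketched numerically. The example below is a hypothetical one-dimensional toy, not a real neural network: "real" data are numbers near 5.0, the generator's only parameter is the mean of its output, and the discriminator simply tracks where real data seem to live while being pushed away from fakes.

```python
import random

random.seed(0)

def real_sample():
    # "Training data": numbers clustered around 5.0.
    return random.gauss(5.0, 0.5)

gen_mean = 0.0     # generator's single parameter: where its fakes are centered
disc_center = 0.0  # discriminator's running estimate of where real data lives

for step in range(2000):
    real = real_sample()
    fake = random.gauss(gen_mean, 0.5)

    # Discriminator update: move toward real samples and away from fakes,
    # sharpening its ability to separate the two.
    disc_center += 0.05 * (real - disc_center) - 0.02 * (fake - disc_center)

    # Generator update: shift output toward the region the discriminator
    # currently accepts as real, trying to fool it.
    gen_mean += 0.05 * (disc_center - gen_mean)

# After training, fakes are centered where the real data are, so location
# no longer distinguishes the two distributions.
print(round(gen_mean, 2))
```

The alternating updates are the essence of the adversarial loop: each side improves in response to the other until the generated data becomes indistinguishable, by this simple measure, from the real data.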
What we've found
Use generative AI to create a musical track from a text prompt
Suno, a web-based text-to-music generator, takes a prompt—such as a music style along with a general lyric idea—and can produce a highly polished, realistic-sounding tune in a matter of seconds. (Suno)

Governments are now using generative AI to manipulate public opinion
A Freedom House report found that AI-powered disinformation campaigns and censorship tactics are spreading, with 16 countries utilizing the technology to shape online narratives or suppress dissent as of 2023. In Venezuela, deepfake software was used to push pro-government propaganda with fake news anchors. (MIT Technology Review)

Indirect prompt injection is a major security flaw of generative AI systems
These attacks can manipulate AI behavior by hiding instructions in websites and PDFs that an AI system later ingests as input. The more companies connect sites, services, and sensitive datasets to AI tools, the greater the chances of exposure to malicious instructions. (WIRED)

Generative AI may automate nearly 10% of tasks in the US economy
This automation is expected to impact both low- and high-wage roles, with those in customer service, food service, and office support most likely to be affected. Large-scale investments in upskilling will be necessary to help individuals find new employment opportunities. (The McKinsey Podcast)

Using generative AI to create video introduces physics-consistency challenges
Models that generate video from text must maintain spatial and temporal coherence while adhering to the laws of reality to produce smooth, logical motion. Ensuring this consistency creates long-term dependencies with high computational costs. (Hugging Face)

Art and video produced through generative AI involve competing neural networks
Generative adversarial networks are a type of deep learning architecture consisting of a generator, which creates data that appear real, and a discriminator, which attempts to distinguish authentic from fake data.
The two train in a competitive loop until the real and fictional data are indistinguishable. (Amazon)

What should CEOs know about generative AI?
While AI may mean different things for different companies, this article argues that exploring AI is a must, not a maybe. It explains that the requirements to get started are not prohibitive, and it cautions that the downside of doing nothing is quickly falling behind competitors. While generative AI may eventually automate some tasks, its real gains could come from how software vendors embed the technology into everyday tools to substantially increase productivity. Read this article to learn more about the business implications of AI and what it could mean for CEOs, businesses, and employees. (McKinsey & Company)

How generative AI differs from machine learning
If machine learning is like teaching a toddler to recognize what a dog looks like by showing them images of one, generative AI is like teaching that same child to actually draw a dog. Unlike machine learning, generative AI can generate. Read this article for a more in-depth comparison between these two types of artificial intelligence. (LinkedIn)

Generative AI tools excel at pattern recognition, not contextual accuracy
Large language models are trained on vast amounts of unstructured data, from which they develop parameters for grammar and associations between words. These connections can introduce errors due to inapplicable reasoning when used on new data in unfamiliar contexts. (The Economist)

Generative AI, in a nutshell
Generative AI, or generative artificial intelligence, is a form of machine learning that can produce text, video, images, and other types of content in a matter of seconds. ChatGPT, DALL-E, and Bard are some of the better-known generative AI applications that produce text or images based on prompts from users.
Generative AI functions by training software models to make predictions and create outputs based on patterns identified in vast amounts of data. It is capable of producing content across almost every field—including academic writing and translation, composing and sound editing, infographics and image editing, scientific research, and more. Read this article to better understand how generative AI works, how it came to be, and its pros and cons. (Investopedia)

Understanding how Claude Code and other AI coding agents function
Under human oversight, a supervising large language model interprets user tasks and delegates work to subordinate LLMs, which can generate code, fix bugs, and run tests, often most effectively for proofs of concept. Incremental backups and versioning are crucial when using such agents, which can lose details during their work as a result of compressing context history to work around memory limitations. (Ars Technica)

NVIDIA named several computer chip architectures after Grace Hopper
Unveiled in 2023, the GH200 Grace Hopper Superchip, designed for generative AI applications in data center-scale systems, combines a Grace CPU with a Hopper GPU to create a coherent memory model. This allows the Hopper to access Grace data directly, reducing the need to repeatedly copy data back and forth and vastly improving performance. (CNET)

An AI-built pixel map reimagines New York City as a video game
Isometric NYC is a zoomable pixel-art map that renders the city in game-style perspective (think SimCity), helpfully clarifying the dense geography and architecture. The makers built the project tile by tile using satellite data and generative AI. Zoom into building-level views or out to see all five boroughs.
(Cannon Eyed)

Explore next-generation geothermal projects worldwide
This tool from the Clean Air Task Force lets users see the locations of geothermal power systems—including proposed, abandoned, and in-development projects—at depths of up to 12.5 kilometers (7.8 miles), where temperatures reach about 450 degrees Celsius (842 degrees Fahrenheit). (Clean Air Task Force)

What do people do once GLP-1 medications succeed at weight loss?
The drugs have been adopted by 1 in 8 Americans, with many achieving their prescribed weight loss. They now face the question of how to keep the weight off—a challenge researchers have recognized for decades in any weight loss program. Each case is unique: some people continue to use GLP-1s like Ozempic in minuscule doses, while others taper off over time. (The Conversation)

Studies show AI tools can result in passive learning with less retention
Although ChatGPT, Google's AI Overviews, and similar software can save time, a study of more than 10,000 adults showed that reliance on them yielded work products that were more generic and included fewer facts than those produced solely by Google search. Weaker brain connectivity has also been observed when users write with AI. (Science Vs)

Blurriness in the James Webb Space Telescope's infrared imaging was fixed by AI
Australian researchers developed the Aperture Masking Interferometry Generative Observations AI algorithm to sharpen images affected by electronic distortions in the JWST's Aperture Masking Interferometer. (Space.com)

As of 2025, the longest lightning bolt in the world spanned several states
Although the average bolt measures less than 16 kilometers (10 miles), the 2017 "megaflash" spanned 829 kilometers (515 miles) from eastern Texas to Missouri. Severe thunderstorms are common in the region, where warm, humid air from the Gulf collides with cool, dry air from the north, generating atmospheric instability.
(NBC News)

Neural networks mimic the structure and behavior of real physical systems
Early generative AI systems were inspired by scientific models such as the Ising model, which explains how atoms interact with one another and align in magnets. As these networks have expanded, their behavior has more closely resembled that of quantum fields, enabling them to be used in exploring complex particle physics interactions. (ScienceClic English)

Space tourism rockets emit up to 100 times more CO₂ per passenger than airplanes
Rocket launches release significant amounts of water vapor, nitrous oxide, and rocket propellants, which generate greenhouse gases and air pollutants. High-altitude emissions can persist for years, affecting the ozone layer. (ideas.ted.com)

One man used an AI band to conduct an elaborate social experiment
In 2025, a Canadian using the pseudonym Andrew Frelon claimed to have used generative AI to create songs, an album cover, and a profile photo for a fake band he called The Velvet Sundown. Frelon eventually revealed that he lied, and his experiment highlighted the uncertainty surrounding AI. (CBC)

Beat Generation iconoclast William S. Burroughs once appeared in a Nike ad
Following the rise of grunge in 1991, major corporations sought to capitalize on the changing cultural tide by using "authentic" voices to bolster their brands. One such voice was Beat writer William S. Burroughs, known for his iconoclastic novels "Naked Lunch" and "Junky," who appeared in a 1994 Nike ad singing the praises of "the coming of the new technology." (What's for afters?)

Generate a PowerPoint filled with "consulting slop" for your business
Created by the consulting company NOBL, this AI generator mimics and mocks the vague, impersonal presentations sometimes shared by consulting firms. (NOBL)

Will-o'-the-wisps may be the result of microlightning between bubbles
Methane and air bubbles moving through water can generate small sparks between them, which can ignite gas.
The above-ground phenomenon may historically have been the result of passing travelers igniting swamp gas with their lanterns. (Science News)

AI-powered models can provide low-cost, localized, and accurate weather forecasting
These models can run on standard laptops, reducing the need for expensive supercomputers and expanding access in developing regions. AI forecasts can help farmers make informed planting decisions, improving crop yields and reducing costs. (The Conversation)

Data centers' electrical needs create increased water demands beyond cooling
Each AI conversation consumes roughly one single-serving bottle of water, and water is also used for steam cycles and cooling in the power plants that generate electricity for data centers. Newer cooling methods, such as immersion cooling, where servers are submerged in fluids that don't conduct electricity, can minimize water use. (The Conversation)

Transformer architecture can recognize and predict patterns in language
The underlying software powering text generation in AI tools associates each word or subword—called a token—with a set of values corresponding to how often it appears near other tokens. By recognizing these associations in prompts, the LLM can infer meaning. (Financial Times)

More than half of family offices have succession plans in place
The UBS annual report on family offices found that its clients are preparing for trade wars, generative AI, and generational wealth transfers. (Wealthbriefing)

As of 2025, ChatGPT accounts for more than half of all chatbot traffic
Since ChatGPT's debut in November 2022, daily use of generative AI has skyrocketed. This chart shows which chatbots are most used globally, from OpenAI's ChatGPT to Perplexity and DeepSeek.
(Visual Capitalist)

AI's growing energy demands are driving tech companies to consider nuclear power
Big Tech has rebranded nuclear power as a green solution to address the strain on the grid from millions of people using power-hungry AI tools. As of mid-2025, generating one image uses as much electricity as charging the average smartphone, or leaving a household light bulb on for 87 consecutive days. (The Conversation)

Google chose Santa Barbara to host its Quantum AI computing project
The project seeks to reduce errors within its quantum computer and integrate 1 million physical qubits (quantum bits) into the room-sized machine. This powerful device will attempt to solve complex problems in medicine, computing, and more. (Google)

AI is learning to be funny
Experts consider humor a particular challenge for large language models, given the complex linguistic play the skill requires. In an experiment, a stand-up comedian performed a set of half AI-written and half human-written jokes, with no discernible difference in the audience's laughter. (Undark Magazine)

Technologies labeled "AI" have historically lost that title after widespread adoption
Generative AI is the latest entry in a recurring cycle in which emerging tools start as "AI" until they become common software, like databases or machine learning. Generative AI and large language models may be the next platform shift after smartphones and the Web. (SuperAI)

Explore a 3D model of the experimental setup Rosalind Franklin used to take Photo 51
To capture the X-ray diffraction image of DNA from the thymus of a calf, Franklin built a humidity-controlling camera to isolate the B form of DNA. The exposure lasted 60 hours and produced a cross pattern, indicating a helical structure. (Sketchfab)

In defense of the advertisement economy
Advertising is frequently reviled for cluttering up public spaces and our everyday experiences.
Nonetheless, it allows vast amounts of goods and services to be rendered free to millions of users—and gives users the option to pay with money or with time and attention. (The Diff)

Listen to the "world's first" song made by a quantum computer and AI
UK startup Moth collaborated with electronic artist ILĀ to create the world's first song using quantum-powered generative AI. The technology has the potential to help create more personalized content based on input parameters, such as music or dialogue in games. (TheNextWeb)

Hurricane conditions are created in a University of Miami wind-wave tank
The SUrge-STructure-Atmosphere INteraction Wind Wave Laboratory contains a 38,000-gallon wave pool connected to a wind tunnel that can generate air currents of up to 155 mph. SUSTAIN allows scientists to study how the atmosphere interacts with the ocean in extreme weather. (PBS - Be Smart)

AI models incorporate a randomness parameter when generating responses to prompts
AI models use this parameter to prevent repetitive outputs by sometimes choosing less likely next words during sequential generation. However, this randomness—alongside insufficient data and training—may cause hallucinations of incorrect results. (Google Cloud)

Agentic AI systems can proactively achieve user goals without ongoing user direction
Unlike large language models, which take no follow-up actions after generating their output, agentic AI uses an initial prompt to take multiple steps, learn from its environment and outcomes, and continue working without further human prompting. (IBM Technology)

The hardware powering AI was originally designed for video game graphics processing
Graphics processing units are specialized computer chips that can process multiple data streams in parallel, including the pixel colors for a display. Such processing is required to find connections across text data and generate outputs quickly.
(The Wall Street Journal)

LLMs are the backbone of AI tools that process and generate natural language
Large language models identify patterns in massive datasets of books, code, and unlabeled text, from which they generate coherent responses. GPT-3 was trained on about 45 terabytes of data and uses 175 billion parameters—each parameter being a value the model adjusts as it learns. (IBM Technology)

Google Search incorporated AI Overviews in 2024 to fend off competitors
Analysts believe that generative AI tools—not antitrust rulings—are Google's biggest threat because they draw users away from traditional search and potential ad revenue. AI partnerships, like the one between Apple and OpenAI, may further reduce Google's market share. (Reuters)

In 2024, Meta launched an AI video generator to create or edit videos through prompts
The tool puts Meta in direct competition with OpenAI and Google as more companies invest billions of dollars to capitalize on the explosion of public interest in machine learning and generative media technologies. (Bloomberg Technology)

AI gained mainstream attention with tools like IBM Watson and Apple's Siri
With the release of ChatGPT in 2022, which drew over 100 million weekly users in just two months, natural language processing and understanding could be achieved at scale via machine learning. Unlike earlier artificial intelligence, which could only retrieve stored knowledge, generative AI produces new text, images, or sounds. (1440)

The ease of deepfake creation may soon overwhelm our sense of digital truth
Shortly after the public release of related technology in 2017, some experts saw the technical limitations of deepfakes as preventing them from becoming widespread tools of disinformation. Since then, the neural networks behind diffusion models and generative AI have eliminated the barriers to creating convincing synthetic media for propaganda.
(The Atlantic)

Watch "Airhead," a short movie created by Sora and post-production FX
Using OpenAI's video-generation tool Sora, a user can create impressive, hyper-realistic video footage from a simple text prompt. This short film features a whimsical look at a balloon-headed person navigating the world. (Shy Kids)

Watch video samples created by AI
Sora, OpenAI's video-generation tool, can create up to 60 seconds of photorealistic video footage from a text prompt, featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. (OpenAI)

Large language models are examples of foundation models
Such models are trained on massive amounts of data, which enables them to perform a variety of tasks rather than being given task-specific data to complete a narrow function. Such models come with high compute costs and potential trust issues due to the unorganized nature of their training data. (IBM)

Transformer architecture was the breakthrough that made AI chatbots possible
It changed how text is processed, from a sequential, "one word at a time" method to one where every word in a text is processed in parallel. Advances in positional encoding and self-attention also helped models better recognize context and word order. (Google Cloud Tech)

The basis for AI image and video generation was first introduced in 2014
Through generative adversarial networks, the generator (a neural network that creates fictional data) competes against the discriminator (a neural network that assesses the data) to continuously train the former until it can make content indistinguishable from the real thing. Training with unstructured data can enable software to identify characteristics of gender, age, and expression. (Arxiv Insights)
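The scaled dot-product self-attention mentioned in the transformer entries above can be sketched in miniature. The example below uses hypothetical two-dimensional "token" vectors and treats queries, keys, and values as the tokens themselves; real models use learned, high-dimensional embeddings with separate query/key/value projections. Each token's new representation becomes a weighted mix of every token's vector, and every position can be computed in parallel.

```python
import math

def softmax(xs):
    # Turn raw scores into attention weights that sum to 1.
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(tokens):
    """Simplified self-attention: queries, keys, and values are the tokens."""
    d = len(tokens[0])
    out = []
    for q in tokens:  # every position attends to every position
        scores = [dot(q, k) / math.sqrt(d) for k in tokens]  # scaled dot products
        weights = softmax(scores)
        # New representation: attention-weighted mix of all token vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, tokens))
                    for i in range(d)])
    return out

# Three toy token embeddings (made-up values for illustration).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = self_attention(tokens)
print([[round(v, 2) for v in row] for row in result])
```

Because each output row is a convex combination of the input vectors, attention lets a token pull in context from the whole sequence at once, which is what replaced the older one-word-at-a-time processing.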