AI’s Weekend Shockwave: GPT-5 Launch, Chip War Heats Up & ‘Deathbots’ Stir Debate (Aug 10–11, 2025)

Over the past 48 hours, the world of artificial intelligence has seen a cascade of breakthroughs, bold moves, and brewing controversies. OpenAI’s latest GPT-5 model made its much-anticipated debut, even as geopolitical tensions over AI chips escalated between superpowers. Tech giants and startups alike unveiled new initiatives – from Apple’s secret AI search tool to Tesla’s surprise pivot in chip strategy – while governments raced to both harness and regulate AI’s power. In academia and society, whistleblowers and ethicists sounded alarms: the UK’s flagship AI institute faces upheaval, and “digital resurrection” chatbots of the dead are raising profound questions about grief and tech. Below is a comprehensive roundup of the major AI developments from August 10–11, 2025, organized by category.
Research Breakthroughs & New AI Technologies
- OpenAI launches GPT‑5: OpenAI released GPT‑5, the much-anticipated successor to GPT‑4, positioning it as a “significant step” toward artificial general intelligence apnews.com. CEO Sam Altman touted GPT‑5 as providing “legitimate PhD-level expertise on demand” across domains apnews.com. The model is now accessible (with usage limits) to anyone with a free ChatGPT account apnews.com and is being integrated into Microsoft’s Copilot assistant ecosystem apnews.com. While GPT‑5’s performance gains over GPT‑4 are modest on benchmarks, experts say it “resets” OpenAI’s core technology in ways that could set the stage for future leaps apnews.com. Researchers like Cornell’s John Thickstun note GPT‑5 shows meaningful improvements but caution it’s not the apocalyptic “end of work” – there remains “a lot of headroom” for AI to continue to improve apnews.com. The launch comes amid frenzied industry competition (rival Anthropic rolled out a new Claude model just days earlier) and massive capital demands for OpenAI to scale its compute infrastructure apnews.com.
- AI in deep-space medicine: NASA and Google unveiled a proof-of-concept “Crew Medical Officer Digital Assistant” to help keep astronauts healthy during deep-space missions without a human doctor on board washingtonexec.com. Announced in an Aug. 8 Google blog, this AI-driven system draws on spaceflight medical literature and uses natural language processing to provide real-time health analyses and decision support when Earth-based experts aren’t available washingtonexec.com. Early trials – including simulated in-flight medical emergencies – showed promising results: the AI reliably suggested diagnoses based on reported symptoms, evaluated via standard clinical exam frameworks washingtonexec.com washingtonexec.com. Google’s federal AI lead Jim Kelly said the innovation is “pushing the boundaries” of what’s possible, delivering essential care “in the most remote and demanding environments” washingtonexec.com. Part of NASA’s Artemis program efforts, this tool could not only safeguard crews on lunar and Mars expeditions but also foreshadows AI’s potential to bring quality healthcare to remote regions on Earth washingtonexec.com.
- OpenAI goes (partly) open-source: In a nod to the open-model movement, OpenAI debuted two new freely downloadable language models – a 117-billion-parameter model (gpt-oss-120b) and a 21B model (gpt-oss-20b) – that can run on relatively accessible hardware axios.com. Remarkably, the larger model operates on a single high-end GPU (~80 GB VRAM) and the smaller on a typical laptop (~16 GB RAM), yet they deliver performance approaching that of some ChatGPT models axios.com. Why it matters: these “open-weight” models (available via Hugging Face and optimized for Windows) cater to demand for privacy and cost-savings by letting users – and even nations – run advanced AI on their own devices instead of relying on Big Tech’s cloud axios.com axios.com. OpenAI’s CEO Sam Altman frames the release as aligned with the mission to get AI into “the hands of the most people possible,” rooted in “democratic values”, as he told Axios axios.com. Analysts see this as a strategic response to rising competition – e.g. China’s open-model DeepSeek and Meta’s Llama – though critics note the models aren’t fully open-source, since OpenAI is withholding details like training data specifics axios.com.
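As a back-of-the-envelope check on those hardware figures, weight memory scales with parameter count and numeric precision. The sketch below is our own illustrative estimate, not an OpenAI specification: the ~4.25 bits-per-parameter figure (approximating 4-bit quantized weights plus scaling metadata) and the 1.2× runtime overhead factor are assumptions.

```python
def est_weight_memory_gb(params_billion: float,
                         bits_per_param: float = 4.25,
                         overhead: float = 1.2) -> float:
    """Rough memory footprint (GB) needed to hold a model's weights.

    bits_per_param ~4.25 approximates 4-bit quantization with scaling
    metadata; overhead covers activations and KV cache. Both numbers
    are assumptions for illustration, not published figures.
    """
    bytes_for_weights = params_billion * 1e9 * bits_per_param / 8
    return bytes_for_weights * overhead / 1e9

# The two models reported in the article:
print(round(est_weight_memory_gb(117), 1))  # → 74.6, close to the reported ~80 GB VRAM
print(round(est_weight_memory_gb(21), 1))   # → 13.4, close to the reported ~16 GB RAM
```

The estimates land near the reported requirements, which is consistent with the models shipping with aggressively quantized weights; at full 16-bit precision the 117B model alone would need well over 200 GB.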
Business & Industry Updates
- Apple’s secret AI search project: Apple has reportedly formed a new team – ironically codenamed “Answers, Knowledge, and Information” – to develop its own AI-powered “answer engine” techcrunch.com. According to a Bloomberg scoop, the team is building a ChatGPT-like system that can draw information from across the web to answer users’ questions – potentially as a standalone app or baked into Siri, Safari, and other Apple products techcrunch.com. The company has started recruiting search algorithm experts for this effort techcrunch.com. Context: Apple’s move comes amid a cautious approach to AI so far – it integrated some ChatGPT features into Siri, but broader AI upgrades to Siri have been “repeatedly delayed” techcrunch.com. And with Google facing an antitrust order that could loosen its grip on Apple’s default search deal techcrunch.com, Apple appears to be hedging its bets. Industry analysts view this as Apple playing catch-up in generative AI and aiming to lessen its dependence on Google. A bespoke “answer engine” could let Apple control the user experience (and privacy) of AI search, but it enters a crowded arena against Google’s own AI-driven search and OpenAI’s chatbots.
- Tesla streamlines its AI chip ambitions: Elon Musk is refocusing Tesla’s AI hardware strategy, shutting down the “Dojo” supercomputer chip project to concentrate on the company’s next-gen automotive AI chips. A Bloomberg report revealed Musk disbanded the Dojo team – an in-house custom AI training effort once touted as a game-changer for Tesla’s self-driving tech reuters.com. Musk confirmed the pivot on X (Twitter), arguing it “doesn’t make sense for Tesla to divide its resources” between two different AI chip designs reuters.com. Going forward, Tesla will pour all effort into its upcoming AI 5 and AI 6 chips – optimized for real-time inference in cars, but also “pretty good” at training, according to Musk reuters.com. Expert view: This surprise move comes after Morgan Stanley had valued Tesla’s Dojo initiative at up to $500 billion in potential market value (seeing it as Tesla’s AWS-like cloud AI service) reuters.com. By pulling the plug on Dojo, Musk signals a pragmatic recognition that Tesla can’t win at both training and inference hardware simultaneously, and that focusing on AI chips to power Autopilot and the Optimus robots is more immediately critical. Industry analysts note that the Big Tech AI arms race requires hard choices – Tesla appears to be prioritizing delivering AI features to consumers now over competing with NVIDIA in the AI datacenter space.
- AI upheaval in IT services – TCS layoffs: The world’s largest IT outsourcing firms are bracing for an AI-induced shakeup. India’s Tata Consultancy Services (TCS) set off alarm bells by announcing over 12,000 job cuts – about 2% of its workforce – the biggest layoffs in its history reuters.com. Officially, TCS attributed the cuts to “skill mismatches” and an uncertain demand outlook (clients delaying projects) rather than automation. However, industry experts say this is likely the first major wave of AI-driven redundancies in the $283 billion global IT outsourcing sector reuters.com. Phil Fersht, CEO of HFS Research, noted that AI’s impact is “eating into the people-heavy services model”, forcing large providers like TCS to rebalance their workforces to maintain profit margins amid clients demanding 20–30% cost reductions reuters.com. The decision by TCS – known for its traditionally job-stable culture – highlights that even white-collar tech jobs are not immune to AI disruption. Analysts project up to 500,000 jobs in IT services (especially middle-management, testing and routine support roles) could be eliminated in the next 2–3 years as enterprises embrace AI. Commentary: For decades, firms like TCS, Infosys, and Accenture thrived by employing armies of engineers for business process and IT support tasks. Now generative AI and automation threaten many of those roles. Consultants note that AI copilots can handle tasks like code testing, report generation, and helpdesk queries far more efficiently. This puts the onus on workers to “re-skill” for higher-value work – echoing OpenAI’s Sam Altman, who recently remarked he’s more worried about older workers resisting retraining than about Gen Z in the AI era. While AI adoption will boost productivity, the transition could be painful. The TCS cuts – coming from India’s top private-sector employer – underscore that even high-skill tech jobs are vulnerable to AI-driven transformation.
Startup Funding & M&A
- Investors double down on AI startups: The AI funding frenzy shows no sign of cooling. Clay, a startup offering AI-driven sales automation, saw its valuation more than double to $3.1 billion in just three months after a new $100 million funding round reuters.com. Alphabet’s growth fund CapitalG led the raise, and other top VCs (Sequoia, First Round, etc.) piled in reuters.com reuters.com. Just a few months prior, Clay was valued at $1.5 billion – highlighting how quickly investors are inflating AI company valuations in 2025. Broader trend: Global dealmaking in the AI sector for the first seven months of 2025 hit its highest level since the 2021 boom, as investors have poured billions into AI and its applications, betting the technology will “enhance productivity and reduce costs” across industries reuters.com. Major AI players are also amassing war chests – for instance, OpenAI reportedly raised $8.3 billion in a fresh round (part of an unprecedented $40 billion fundraising plan) led by Dragoneer Investment Group techcrunch.com techcrunch.com. Insight: Despite economic uncertainties, AI is being seen as a transformative tech wave akin to the dot-com era, fueling a gold-rush mentality for startups. However, some analysts warn of a possible bubble, noting that not every AI product will live up to its lofty valuation and that a shakeout is likely as the market matures.
- Rumble eyes $1.2 B AI cloud acquisition: U.S. video platform Rumble (known for its YouTube-alternative streaming site) is considering an offer of about $1.17 billion (1 billion euros) for German AI cloud group Northern Data reuters.com. Rumble said a deal would give it control of Northern Data’s GPU-rich cloud division (called Taiga) and its large-scale data center arm (Ardent), with plans to integrate both into Rumble’s own operations reuters.com. Northern Data confirmed on Monday that its board is evaluating Rumble’s potential offer and is open to further discussions reuters.com. The proposed terms – 2.319 Rumble shares for each Northern Data share – value the target at roughly $18.30 per share, a steep ~32% discount to its last closing price, so both companies indicated that any final offer, if made, would likely be at a higher valuation reuters.com reuters.com. Notably, stablecoin platform Tether, the majority shareholder of Northern Data, has expressed support for the transaction (the plan assumes Northern Data will divest its crypto-mining unit beforehand, using the proceeds to pay down a Tether loan) reuters.com. Following the deal, Tether would commit to a multi-year GPU purchase agreement with Rumble, effectively becoming a major customer reuters.com. Big picture: This unusual tie-up underscores the red-hot demand for AI infrastructure. Rumble – a mid-size tech firm known for social media – is looking to instantly boost its cloud AI capabilities by acquiring Northern Data’s trove of high-end NVIDIA chips (over 20,000 H100 GPUs and 2,000+ H200s) reuters.com. The move, if it proceeds, would mark a significant expansion of Rumble’s business into AI cloud services. However, the discussions are still preliminary, and both parties caution there’s no certainty a formal offer will ultimately materialize reuters.com.
Policy, Regulation & Geopolitical Developments
- US–China tech tensions flare over AI chips: The “chip war” between Washington and Beijing intensified ahead of a possible Trump–Xi summit. China is pressing the U.S. to ease export controls on advanced semiconductors critical for AI as part of a broader trade deal negotiation reuters.com. According to an FT report (confirmed by Reuters), Chinese envoys in Washington specifically want curbs lifted on high-bandwidth memory (HBM) chips – technology vital for training AI models – which Chinese firms like Huawei need to develop their own AI processors reuters.com. Successive U.S. administrations (from Biden to Trump) have tightened such exports to stymie China’s progress in AI and defense reuters.com. Geopolitical context: President Donald Trump has cast the U.S.–China AI race as the defining conflict of the 21st century. China’s request underscores that access to AI hardware is now a top-tier bargaining chip alongside tariffs and trade balances. Meanwhile in Beijing, an influential state-media-linked social account (affiliated with CCTV) slammed NVIDIA’s new H20 AI chips – a version tweaked for China after U.S. bans – as unsafe and possibly containing “backdoor” mechanisms for remote shutdown ts2.tech. The post urged Chinese buyers to shun the H20, and China’s cyberspace regulator even summoned NVIDIA for talks amid these security allegations ts2.tech. New twist: Reuters revealed that NVIDIA and AMD have quietly agreed to give the U.S. government 15% of revenue from any advanced AI chip sales to China, as a condition for obtaining export licenses reuters.com. (The U.S. had halted NVIDIA’s high-end H20 shipments in April, but then allowed a limited resumption with this profit-sharing proviso.) One analyst called the arrangement “wild” – observing that either selling such chips to China is a national security risk (in which case it shouldn’t be allowed at all) or it isn’t (in which case imposing a hefty 15% cut is hard to justify) reuters.com. 
Expert insight: Together, these developments highlight a bifurcating tech world. China is leveraging its market power to demand AI hardware on its terms, while also signaling it won’t fully trust even “approved” foreign chips. The U.S. faces a dilemma – whether to relax chip rules to ease tensions, or double-down on tech restrictions amid hawks’ warnings. For AI industries, the fallout could be increasingly divergent East/West ecosystems if no compromise is reached.
- Washington fast-tracks AI in government: The U.S. General Services Administration (GSA) added OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini to its list of approved software tools for federal agencies reuters.com. Announced Aug. 5, these approvals are part of the Trump administration’s new AI blueprint (released July 23) aimed at vastly expanding AI use across government and loosening various regulations, in a bid to maintain the U.S. edge over China reuters.com. With the GSA’s green light, agencies can now readily procure and deploy these AI chatbots and assistants through a centralized platform with pre-negotiated contracts reuters.com. The GSA emphasized it focused on models that “prioritize truthfulness, accuracy, transparency, and freedom from ideological bias” reuters.com. Broader policy: President Trump’s AI Action Plan includes 90+ recommendations – from boosting AI hardware/software exports to allies, to preempting state laws deemed too restrictive of AI innovation reuters.com. It’s a marked reversal from former President Biden’s approach: the Biden administration had required federal agencies to adopt concrete AI safeguards and had even issued an order to ensure AI wasn’t used for misinformation, measures that Trump quickly rescinded upon taking office reuters.com. Commentary: The policy pivot has drawn mixed reactions. Proponents argue it will accelerate innovation and keep U.S. AI competitive globally. Critics worry that a “light-touch” regime will lead agencies to deploy AI systems that haven’t been fully vetted for bias or security. Nonetheless, the federal endorsement of tools like ChatGPT is a milestone – likely to spur more mainstream business and public-sector uptake of AI reuters.com. It also reflects a stark ideological divide in AI governance – one administration’s “high fence” on AI exports and ethics is another administration’s red tape to be torn down reuters.com.
Ethical Debates & Public Response
- Whistleblowers warn of turmoil at top AI institute: A group of employees at the UK’s Alan Turing Institute (ATI) – the country’s flagship AI research center – filed a bombshell whistleblowing complaint alleging governance failures and a “toxic” internal culture lab.engage-365.com lab.engage-365.com. Revealed on Aug. 10, the complaint to the Charity Commission claims ATI’s board of trustees (led by ex-Amazon UK chief Douglas Gurr) has neglected core duties, ignored a staff letter of no confidence, and let a climate of “fear, exclusion and defensiveness” fester lab.engage-365.com. The whistleblowers warn the institute is “in danger of collapse” as the UK government pressures it to shift focus toward defense and national security – even threatening to pull future funding lab.engage-365.com lab.engage-365.com. Indeed, Britain’s technology secretary recently urged ATI to reorient its research to military and “sovereign” tech, making clear that continued funding will be conditional on leadership and strategy changes lab.engage-365.com. In response, ATI’s management has begun a drastic restructuring: roughly 50 staff (about 10% of the workforce) were put at risk of redundancy, and projects on online safety, housing inequality, AI ethics and more are being closed or paused in favor of defense-oriented work lab.engage-365.com lab.engage-365.com. Expert commentary: Governance experts say this episode underscores tensions in publicly funded AI labs – balancing academic independence against political priorities. The ATI case has sparked alarm among researchers who fear a militarization of the institute’s mission (pivoting to defense at the expense of social-good projects) lab.engage-365.com lab.engage-365.com. It also spotlights leadership challenges: multiple senior scientists have quit under ATI’s relatively new CEO, and staff accuse the board of failing to hold management accountable lab.engage-365.com lab.engage-365.com. 
The Charity Commission has declined to confirm if it will investigate, citing confidentiality around whistleblower reports. Regardless, the crisis has ignited debate in the UK AI community: should a national institute primarily serve government priorities (like defense tech), or uphold a more independent, ethics-focused agenda? The coming months – with funding reviews looming – may determine the institute’s future direction and credibility.
- “Digital resurrection” sparks soul-searching: A strange and uneasy trend is emerging at the intersection of technology and grief – the rise of AI “deathbots” that create digital facsimiles of people who have died. This weekend, The Guardian profiled several jaw-dropping examples. At a recent U.S. concert, Rod Stewart performed on stage alongside AI-generated projections of late rock icons (Ozzy Osbourne, Tina Turner, Bob Marley, and others), leaving some fans awed but others deeply disturbed theguardian.com. Around the same time, journalist Jim Acosta “interviewed” a digital avatar of Joaquin Oliver – a teenager killed in a 2018 school shooting – which the boy’s parents commissioned so they could hear their son’s voice again theguardian.com. And Reddit co-founder Alexis Ohanian posted an AI-generated animation of his late mother hugging him as a child, saying it moved him to tears and that he rewatched it 50+ times theguardian.com. These are early snapshots of what commentators are calling the “digital resurrection” phenomenon – using a person’s photos, videos, voice notes and texts to algorithmically revive their likeness after death theguardian.com. Numerous startups now offer to create such griefbots, raising thorny questions about consent, exploitation, and the psychological impact on the bereaved theguardian.com. Expert views: Psychologists and ethicists are sharply divided. London-based cyberpsychologist Elaine Kasket notes that thanks to today’s readily available large language models, it’s “very straightforward” to build a chatbot that feels uncannily like a lost loved one – especially if you feed it ample “digital remains” (messages, recordings, social media) of the person theguardian.com. Just a few years ago, this kind of “virtual immortality” seemed like sci-fi; now interactive personal avatars can be created relatively easily and cheaply, and demand is expected to grow. However, Dr. Louise Richardson of the University of York warns that deathbots may disrupt healthy grieving. By offering an illusion of ongoing interaction, “they can get in the way of recognising and accommodating what has been lost, because you can continue to ‘talk’ to the deceased” theguardian.com. There’s also concern that these AIs present a sanitized, idealized version of the person – families might consciously or unconsciously omit negative traits, creating a digital ghost that never fully reflects the real individual. Philosophers have even dubbed this practice “digital necromancy,” cautioning that it’s a deceptive experience: the bereaved think they’re talking to their loved one, but it’s really a machine, and they “can become dependent on a bot, rather than accepting and healing” theguardian.com. On the ethical front, since the dead cannot consent, who has the right to a person’s data and identity after death? Some people now explicitly put “no AI clone” clauses in their wills. Furthermore, a burgeoning “grief tech” industry could exploit vulnerable mourners – in China, for instance, simple photo-based avatars can cost as little as ¥20, while hyper-realistic interactive doubles run up to ¥50,000 (~$7,000). Bottom line: digital resurrection is here, and society is only beginning to grapple with its moral and emotional ramifications. As one ethicist observed, the technology taps into our deepest longing to reconnect – but using AI to confront death might raise more ghosts than it settles theguardian.com.
New Consumer & Enterprise AI Tools
- Microsoft rolls out Copilot 3D: In a quiet but intriguing launch, Microsoft introduced Copilot 3D, a free AI feature that converts 2D images into 3D models theverge.com. Available in the experimental Copilot Labs (preview) for all users – no paid subscription required – Copilot 3D lets anyone upload a standard photo and get back a downloadable 3D object, compatible with game engines, 3D printers, or AR/VR apps theverge.com. Early hands-on tests show it works best with clear, well-lit objects against simple backgrounds. For example, the AI turned IKEA furniture photos and even bananas into fairly accurate 3D models theverge.com. However, it struggles with complex subjects like people or animals: one tester’s attempt to model his pet dog produced hilariously mangled anatomy (Copilot 3D guessed the dog had an extra body part, placing it on his back) theverge.com. This quirk highlights that the feature is still a preview. Why it matters: Copilot 3D lowers the barrier to 3D content creation, hinting at a future where AI can generate immersive media from everyday inputs. Game designers, e-commerce sites, and digital artists could quickly create 3D assets from simple photos, saving significant time and skill. Notably, Microsoft launched Copilot 3D alongside a broader Copilot update – confirming it has now integrated OpenAI’s new GPT‑5 model across Windows, Office, GitHub, and Azure services to enhance their AI capabilities theverge.com. Expert commentary: Tech reviewers call Copilot 3D a fun glimpse of AI’s creative potential for consumers and creators. The fact that it’s free and requires no prompt engineering (just an image upload) makes it very accessible. Still, experts highlight the limitations: it currently can’t handle human faces or realistic figures (guardrails block generating 3D models of real people without consent) theverge.com, and fine details can be hit-or-miss. 
Microsoft advises using images with simple backgrounds and good depth; even then, some outputs may require cleanup in a 3D editor. As AI-generated 3D improves, we could see an explosion of user-made 3D content in virtual worlds – but for now, Copilot 3D is best suited for basic objects and playful experiments (and perhaps not our beloved pets).
- AI goes enterprise – CRM meets Claude AI: Business software is rapidly embedding AI copilots. This weekend, HubSpot announced a new Claude AI connector that weaves Anthropic’s advanced chatbot into its popular customer relationship management (CRM) platform ts2.tech. The integration allows sales, marketing, and support teams to query their company’s CRM data in natural language and receive AI-generated summaries, insights, or even charts. For example, a user could ask, “Which deals are at risk this quarter and why?” – and Claude, with secure access to relevant internal data, can produce a breakdown drawing on meeting notes, emails, and pipeline records ts2.tech. Crucially, HubSpot’s connector respects role-based permissions and does not feed sensitive customer information back into Anthropic’s model training process ts2.tech. Analyst view: This reflects a wider trend of enterprise software adding “copilot” features atop proprietary data – from Salesforce’s Einstein GPT to Microsoft Dynamics 365 Copilot. By bringing AI directly to business data, vendors promise boosts in productivity and decision-making. However, companies remain cautious about data privacy and accuracy. HubSpot addressed this by keeping the AI entirely behind the customer’s firewall and by limiting it to an advisory role (the AI can suggest insights but not execute changes itself) ts2.tech. Experts say AI-augmented analytics like this can be a game-changer for SMEs lacking data science teams, effectively turning plain English questions into actionable business intelligence. But they also warn about over-reliance on AI interpretations – humans must verify critical insights, since AI may sometimes hallucinate correlations or miss context ts2.tech. Still, the ability to simply ask your CRM for answers, instead of manually crunching reports, is poised to transform workflows in sales and marketing. It marks another step in making advanced AI a day-to-day partner for knowledge workers.
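The grounding-and-permissions pattern described above can be illustrated with a small sketch. This is a simplified mock of our own, not HubSpot’s or Anthropic’s actual connector API: the record fields and the `visible_to_roles` flag are hypothetical names chosen for illustration.

```python
def build_crm_prompt(question: str, records: list, user_role: str) -> str:
    """Assemble an LLM prompt grounded only in CRM records the asking
    user is permitted to see (a toy stand-in for role-based permissions)."""
    permitted = [r for r in records if user_role in r["visible_to_roles"]]
    context = "\n".join(
        f"- {r['deal']}: stage={r['stage']}; notes={r['notes']}"
        for r in permitted
    )
    return (
        "Answer using ONLY the CRM context below; say 'unknown' otherwise.\n"
        f"CRM context:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical CRM records; the second one is hidden from the sales role.
records = [
    {"deal": "Acme renewal", "stage": "at risk", "notes": "budget freeze",
     "visible_to_roles": {"sales", "exec"}},
    {"deal": "Globex expansion", "stage": "closed won",
     "notes": "confidential terms", "visible_to_roles": {"exec"}},
]

prompt = build_crm_prompt("Which deals are at risk this quarter and why?",
                          records, user_role="sales")
print(prompt)  # contains "Acme renewal" but not "Globex expansion"
```

The key design choice mirrored here is that permission filtering happens before anything reaches the model, so the AI physically cannot leak records the user was never allowed to see – a stronger guarantee than asking the model to withhold them.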
Events & Conferences
- Global AI summit convenes in Vegas: Ai4 2025 – North America’s largest annual AI industry event – kicked off on August 11 at the MGM Grand in Las Vegas, bringing together a who’s who of the AI world. The three-day conference features 600+ speakers and around 8,000 attendees from over 75 countries datacamp.com. Ai4’s agenda spans 21 industry tracks, covering everything from AGI and enterprise AI to healthcare, finance, and government applications. This year’s lineup includes some of AI’s most prominent figures: Geoffrey Hinton – the 77-year-old “Godfather of AI” – returned to the keynote stage, joined by luminaries like Fei-Fei Li (sometimes called the “Godmother of AI”) and Emmett Shear (former Twitch CEO and briefly OpenAI’s interim CEO), among hundreds of others ai4.io ai4.io. Organizers say they’ve “elevated” the program to reflect the rapid evolution of generative AI over the past year, but also to address the practical challenges of integrating AI into real-world operations morningstar.com. Major themes include the rise of large language models and autonomous agents, scaling AI infrastructure (the event’s expo floor is buzzing with new silicon and cloud offerings), and ensuring responsible AI use amid calls for regulation. The conference’s opening sessions struck a balance between optimism and caution. In his remarks, Hinton – who famously resigned from Google in 2023 to warn about AI risks – urged companies to prioritize safety research even as they race ahead in AI deployment. Other speakers highlighted AI’s tremendous potential in fields like drug discovery and climate modeling, while acknowledging concerns around bias, IP rights, and workforce impacts. Ai4 2025’s sheer scale and diverse attendance underscore how, in the wake of ChatGPT’s breakout, AI has shifted from niche to mainstream.
The event illustrates an industry at a crossroads: exuberant about breakthroughs in generative AI and agentic systems, yet intent on hashing out how to implement these technologies responsibly, securely, and profitably across the global economy morningstar.com.
Sources: OpenAI apnews.com apnews.com; Reuters reuters.com reuters.com; Associated Press apnews.com apnews.com; Reuters reuters.com reuters.com; WashingtonExec washingtonexec.com washingtonexec.com; Reuters reuters.com reuters.com; The Guardian theguardian.com theguardian.com; Reuters reuters.com reuters.com; The Verge theverge.com theverge.com; Engage 365 (via Guardian) lab.engage-365.com lab.engage-365.com; DataCamp datacamp.com; Ai4 2025 Program ai4.io.