The future of #Windows is #Cloud & #AI. Yesterday we announced #Researcher with Computer Use in #Microsoft365Copilot, a major leap toward more autonomous AI that works for you. With #ComputerUse, #Researcher goes beyond reasoning and researching: it can now act on your behalf, using a secure virtual computer to navigate #public, #gated, and #interactive web content. This powerful extension of Researcher, combined with the unique ability to connect to your work data, unlocks smarter research, deeper insights, and more comprehensive reports. And #WindowsCloud provides the foundation that lets Researcher and other #ComputerUseAgents perform these actions. Read more here: https://lnkd.in/d3Zxbsyu Share your favourite research prompt in the comments! #UnlockedPotential #Windows365 #FutureOfWork
Ben Martin Baur’s Post
More Relevant Posts
A full LLM, Granite-4.0-Micro, running directly in the browser 🤓

🧠 Decentralized AI in Action: Granite-4.0-Micro via WebGPU. No server. No cloud. Just WebGPU and a modern browser.

Why does this matter?
- 🔐 Privacy-first AI: No data leaves your device. Ideal for families, schools, and local workflows.
- ⚡ On-device inference: No setup, no login. Just open a tab and start prompting.
- 🌍 Decentralized potential: This is how AI can scale regionally, without compromising autonomy or control.

AI that empowers rather than extracts. Granite-4.0-Micro proves that even compact models can deliver meaningful intelligence: locally, securely, and affordably. IBM Granite and Hugging Face, thank you for making it real. Build AI that respects context, celebrates independence, and runs where it matters most. OpenAI, Perplexity #DecentralizedAI #WebGPU #Granite4 #OnDeviceLLM #PrivacyTech #EdgeAI #BrowserInference #AIForEveryone
🚀 Huge news in the AI world, and I couldn’t be more excited to share it! AWS and OpenAI have just announced a multi-year, $38B strategic partnership to power the next wave of AI innovation. Under this agreement, OpenAI will run its workloads on AWS’s world-class infrastructure: from Amazon EC2 UltraServers packed with hundreds of thousands of NVIDIA GPUs to the ability to scale to tens of millions of CPUs. This collaboration will fuel everything from ChatGPT inference to training next-gen foundation models and scaling agentic AI workloads, with capacity coming online through 2026 and beyond. This is what happens when innovation meets scale. It’s more proof of why leading AI organizations trust AWS to build, train, and deploy their most demanding workloads: securely, efficiently, and at global scale. The next chapter of AI is being written! More info below: https://lnkd.in/dP_fj-EH #ai #aws #openai
💡 A $38B Amazon Web Services (AWS)-OpenAI partnership isn't just massive; it's the infrastructure rocket fuel for scaling agentic AI from inference to next-gen training. In ops, this means seamless, secure compute at unprecedented levels, letting us deploy AI innovations faster while keeping costs and reliability in check: a game-changer for efficiency-driven teams. How might this accelerate your AI experiments? #AI #CloudScale #OpsInnovation
Microsoft still has exclusive access to the OpenAI models until 2032, which is *coincidentally* how long this new deal lasts. That means GPT-5 and the other proprietary models won't become available on Amazon Bedrock until after that date. Still, this is great news for both parties, and for OpenAI customers.
💡 A $38B Amazon Web Services (AWS)-OpenAI deal isn't hype; it's the compute backbone that could supercharge AI from chatbots to agentic systems at global scale. In ops, this means more reliable, cost-effective infrastructure for deploying AI pilots, easing the bottleneck between innovation and rollout. How could this shift the way your team scales AI experiments? #AI #CloudInnovation
🚀 What if the most powerful AI models could work **in the cloud** without ever seeing *your* data?

A recent announcement from Google introduced **Private AI Compute**, a new cloud-based platform that pairs Gemini's top-tier models with the same privacy guarantees we've come to expect from on-device AI.

**Why this matters:**
- **Full-speed intelligence**: Cloud TPUs give Gemini the compute it needs for advanced reasoning that phones can't handle alone.
- **Zero-knowledge processing**: Remote attestation and end-to-end encryption create a sealed enclave; even Google can't read your inputs.
- **Seamless integration**: The same stack that powers Gmail and Search now backs features like Pixel's Magic Cue and multilingual Recorder summaries, delivering smarter suggestions without compromising privacy.

In practice, this means you could get hyper-personalized, proactive assistance (think "suggest the best time to schedule a meeting" or "summarize a long call in another language") while your personal context stays locked to you alone. I'm excited because it bridges the gap between **helpful AI** and **responsible data stewardship**. As we rely more on AI to anticipate our needs, having a trustworthy compute layer will be the differentiator between hype and real value. Curious how this could reshape your product roadmap or data strategy? Let's discuss! #AIPrivacy #ResponsibleAI #CloudComputing #GeminiModels #TechInnovation
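The attest-then-encrypt flow behind a sealed enclave can be sketched in miniature. This is a toy illustration only: the shared-HMAC "attestation", the key names, and the XOR keystream cipher are stand-ins invented for clarity; real systems such as Private AI Compute rely on hardware-rooted certificate chains and authenticated encryption (e.g. AES-GCM), not a pre-shared key.

```python
import hashlib
import hmac
import secrets

# Toy trust anchor: in real attestation this is a vendor certificate chain.
VENDOR_KEY = b"vendor-attestation-key"                       # hypothetical
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-image-v1").digest()

def attest(measurement: bytes) -> bytes:
    """Server side: sign a 'measurement' of the enclave image."""
    return hmac.new(VENDOR_KEY, measurement, hashlib.sha256).digest()

def verify_attestation(measurement: bytes, signature: bytes) -> bool:
    """Client side: only trust the enclave if the signed measurement
    matches the image the client expects."""
    expected = hmac.new(VENDOR_KEY, EXPECTED_MEASUREMENT, hashlib.sha256).digest()
    return measurement == EXPECTED_MEASUREMENT and hmac.compare_digest(signature, expected)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Hash-counter keystream; a stand-in for a real AEAD cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Client flow: verify the enclave first, then send an encrypted prompt.
sig = attest(EXPECTED_MEASUREMENT)
assert verify_attestation(EXPECTED_MEASUREMENT, sig)

session_key = secrets.token_bytes(32)   # in practice: set up via key exchange
nonce = secrets.token_bytes(12)
prompt = b"summarize my last call"
ciphertext = encrypt(session_key, nonce, prompt)

# Only a party holding session_key (the attested enclave) recovers the prompt.
assert encrypt(session_key, nonce, ciphertext) == prompt
```

The point of the ordering is that no plaintext leaves the client until the attestation check passes; if the enclave measurement were tampered with, `verify_attestation` would reject it and the prompt would never be sent.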
Expanding support for AI developers on Hugging Face: For those building with AI, most are in it to change the world, not twiddle their thumbs. So when inspiration strikes, the last thing anyone wants is to spend hours waiting for the latest AI models to download to their development environment. That's why today we're announcing a deeper partnership between Hugging Face and Google Cloud that:
* reduces Hugging Face model download times through Vertex AI and Google Kubernetes Engine
* offers native support for TPUs on all open models sourced through Hugging Face
* provides a safer experience through Google Cloud's built-in security capabilities.

We'll enable faster download times through a new gateway for Hugging Face repositories that will cache Hugging Face models and datasets directly on Google Cloud. Moving forward, developers working with Hugging Face's open models on Google Cloud should expect download times to take minutes, not hours. We're also working with Hugging Face to add native support for TPUs for all open models on the Hugging Face platform. This means that whether developers choose to deploy training and inference workloads on NVIDIA GPUs or on TPUs, they'll experience the same ease of deployment and support. Open models are gaining traction with enterprise developers, who typically work with specific security requirements. To support them, we're working with Hugging Face to bring Google Cloud's extensive security protocols to all Hugging Face models deployed through Vertex AI. This means that any Hugging Face model on Vertex AI Model Garden will now be scanned and validated with Google Cloud's leading cybersecurity capabilities, powered by our Threat Intelligence platform and Mandiant.
A more open AI: Ultimately, we're committed, through our robust and diverse AI ecosystem, to supporting developers with class-leading AI tools, a choice of AI-optimized infrastructure, and a selection of hundreds of models, including a broad set of open models optimized to run on Google Cloud through Hugging Face. This expanded partnership with Hugging Face furthers that commitment and will ensure that developers have an optimal experience when serving AI models on Google Cloud, whether they choose a model from Google, from one of our many partners, or one of the thousands of open models available on Hugging Face. You can read more on Hugging Face's blog. 🔗 Google AI #AIDevelopment #HuggingFace #GoogleCloud #MachineLearning #ArtificialIntelligence
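The caching gateway described above (fetch a repository from the origin once, then serve every later download from a nearby copy) is a classic read-through cache. Here is a generic sketch of that pattern; everything in it is a hypothetical stand-in for illustration (`fetch_from_origin`, the repo id, and the on-disk layout are invented, not the actual gateway's design).

```python
import hashlib
import tempfile
from pathlib import Path

def fetch_from_origin(repo_id: str) -> bytes:
    """Stand-in for the slow download from the upstream model hub."""
    fetch_from_origin.calls += 1        # count expensive origin round-trips
    return f"weights-for-{repo_id}".encode()
fetch_from_origin.calls = 0

class ReadThroughCache:
    """First request for a repo pulls from the origin and stores a local
    copy; every later request is served from disk, never hitting origin."""

    def __init__(self, cache_dir: Path):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def _path(self, repo_id: str) -> Path:
        # Hash the id so slashes in "org/model" are filesystem-safe.
        return self.cache_dir / hashlib.sha256(repo_id.encode()).hexdigest()

    def get(self, repo_id: str) -> bytes:
        p = self._path(repo_id)
        if p.exists():                      # cache hit: fast local read
            return p.read_bytes()
        data = fetch_from_origin(repo_id)   # cache miss: slow path, then store
        p.write_bytes(data)
        return data

cache = ReadThroughCache(Path(tempfile.mkdtemp()))
first = cache.get("org/model")    # miss: goes to the origin
second = cache.get("org/model")   # hit: served from the local copy
assert first == second
assert fetch_from_origin.calls == 1
```

The "minutes, not hours" claim falls out of exactly this structure: once the gateway holds a copy inside Google Cloud's network, every subsequent developer download is an intra-cloud read rather than a cross-internet transfer.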
🚨 Google just dropped something HUGE for AI pros. You can now run LLMs directly on your phone. No cloud. No server. No internet. Just pure AI in your hand. ⚡

🧠 It's called Edge Gallery, a brand-new app from Google Research Labs. Think of it as your personal AI lab, running Gemma models right on your device.

💡 What you can do:
💬 Chat with an LLM offline
🖼️ Analyze images
🎧 Transcribe audio
🧩 Build and test prompts, all locally

⚙️ How to set it up:
1️⃣ Install Edge Gallery from the Play Store
2️⃣ Connect your Hugging Face account
3️⃣ Authorize Gemma model downloads
4️⃣ Pick your mode → run it offline

🚫 No subscriptions 🚫 No data collection 🚫 No server latency
✅ 100% private ✅ 100% offline ✅ 100% unlimited

This isn't just another AI app. This is Google bringing LLMs to the edge, to your mobile device. For developers, researchers, and privacy-first professionals, this marks the beginning of the offline AI era. 📱 AI in your hand 💾 Your data stays yours ♾️ No limits. Ever. #AI #Google #EdgeGallery #Gemma #OfflineAI #LLM #OnDeviceAI #TechInnovation #MachineLearning #AndroidAI #PrivacyFirst #HuggingFace #EdgeComputing #AITools
🤯 Microsoft just dropped a bombshell! Their new 'Copilot+' PCs are set to redefine personal computing, putting powerful AI capabilities right on your device, not just in the cloud. Think instant recall of everything you've seen on your screen with "Recall," real-time language translation, and incredible image generation – all processed locally. This isn't just an upgrade; it's a paradigm shift towards truly personalized, privacy-centric AI experiences directly from your laptop. Are you ready for your next PC to be an AI powerhouse? What's the first local AI feature you'd want to try? #Microsoft #CopilotPlus #AIPCs #LocalAI #TechNews #FutureofWork #Innovation #IT #AI #MicrosoftBuild