Artificial Intelligence

Explore top LinkedIn content from expert professionals.

  • View profile for Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,522,425 followers

    Yesterday I had one of those rare conversations that stays with you. I sat down with Dr. Rebecca Hinds, PhD from Glean to unpack her new research, The AI Transformation 100, and it completely reframed how I think about AI in organizations. In the article I just published, I share the insights that hit me hardest — because they match exactly what I see with executives and teams every day.

    Here’s what you’ll learn:
    🔸 80% of AI transformation is just… transformation. The same human issues, politics, and leadership gaps — simply amplified by AI.
    🔸 AI is a megaphone. Healthy cultures accelerate. Broken workflows break faster. Rebecca’s data makes this painfully clear.
    🔸 Leadership behavior is the biggest adoption driver. If leaders don’t use AI themselves, their teams won’t either.
    🔸 100 practical ideas from 100+ leaders. Not hype — real moves happening right now.

    This is one of the most grounded and useful reports I’ve come across. The AI Transformation 100 releases today and I strongly encourage every business leader to read it.

    Access the full report and join the conversation: https://lnkd.in/e7YYwBrt
    And if you want the deeper story, don’t miss my full interview with Rebecca — you’ll want to watch it to the end: https://lnkd.in/egsdhVaA

    #GleanAmbassador #AITransformation #Leadership #ArtificialIntelligence #DigitalTransformation #ChangeManagement #FutureOfWork #BusinessStrategy #Innovation

  • View profile for Andy Jassy
    Andy Jassy is an Influencer
    1,024,156 followers

    Every cloud provider faces the same AI infrastructure challenge: chips need to be positioned close together to exchange data quickly, but they generate intense heat, creating unprecedented cooling demands. We needed a solution that let us retrofit liquid cooling into our existing air-cooled data centers without waiting for new construction. And it needed to deploy rapidly so we could bring customers these powerful AI capabilities while we transition toward facility-level liquid cooling.

    Think of a home where only one sunny room needs AC while the rest stays naturally cool – that’s what we wanted to achieve, allowing us to efficiently land both liquid-cooled and air-cooled racks in the same facilities with complete flexibility.

    The available options weren't great. We could either wait to build specialized liquid-cooled facilities or adopt off-the-shelf solutions that didn't scale or meet our unique needs. Neither worked for our customers, so we did what we often do at Amazon… we invented our own solution.

    Our teams designed and delivered our In-Row Heat Exchanger (IRHX), which uses a direct-to-chip approach with a "cold plate" on the chips. The liquid runs through this sealed plate in a closed loop, continuously removing heat without increasing water use. This lets us support traditional workloads and demanding AI applications in the same facilities. By 2026, our liquid-cooled capacity will grow to over 20% of our ML capacity, which is at multi-gigawatt scale today.

    While liquid cooling technology itself isn't unique, our approach was. Creating something this effective that could be deployed across our 120 Availability Zones in 38 Regions was significant. Because this solution didn't exist in the market, we developed a system that enables greater liquid cooling capacity with a smaller physical footprint, while maintaining flexibility and efficiency.

    Our IRHX can support a wide range of racks requiring liquid cooling, uses 9% less water than fully air-cooled sites, and offers a 20% improvement in power efficiency compared to off-the-shelf solutions. And because we invented it in-house, we can deploy it within months in any of our data centers, creating a flexible foundation to serve our customers for decades to come.

    Reimagining and innovating at scale is something Amazon has done for a long time, and it's one of the reasons we’ve been the leader in technology infrastructure and data center invention, sustainability, and resilience. We're not done… there's still so much more to invent for customers.

  • View profile for Marc Benioff
    Marc Benioff is an Influencer
    244,179 followers

    The Agentic Enterprise is driving profound change across every industry, but nowhere are the stakes higher than in healthcare. There is an incredible opportunity to elevate the work of healthcare professionals and deliver stronger care for patients around the world.

    In an essay for TIME, Murali Doraiswamy, professor of medicine at Duke University, and I discuss how AI is revolutionizing medicine, including:
    • Flagging subtle abnormalities in scans and slides that a human eye might miss.
    • Speeding up the discovery of drugs and drug targets.
    • Providing patients faster and more personalized support, from scheduling to flagging side effects.

    But we’ve also seen that over-reliance on AI can lead to “deskilling” — in which medical professionals become less effective. That underscores the importance of approaches that keep humans at the center, such as the Intelligent Choice Architecture (ICA), where AI systems don’t make decisions but nudge providers to take a second look at results, weigh alternatives, and stay actively engaged in the process.

    The future of work is humans and AI agents working together. If we commit to designing systems that sharpen our abilities, we can combine the promise of AI with the critical thinking, compassion, and real-world judgment that only humans bring. https://lnkd.in/gqkTUfb6

  • View profile for Martyn Redstone

    Head of Responsible AI & Industry Engagement @ Warden AI | Ethical AI • AI Bias Audit • AI Policy • Workforce AI Literacy | UK • Europe • Middle East • Asia • ANZ • USA

    21,246 followers

    Three AI recruiters look at the same 109 CVs. They agree only 14% of the time.

    That’s not the start of a joke. And that's not efficiency. That’s what I call 'Rank Roulette'.

    When I tested ChatGPT, Gemini and Grok against the same job spec and anonymised CV set, here’s what happened:
    • 14% overlap in shortlists → Four times out of five, the models disagreed.
    • ±2.5 places volatility → Yesterday’s #2 became today’s #5.
    • 55% of CVs never surfaced → Candidates vanished with no audit trail.
    • 96% recycled rationales → Fluent, but shallow logic.

    We’re told by vendors and in-house 'tinkerers' that LLMs can “shortlist in seconds”. The truth: they behave more like over-confident interns - smooth on the surface, but shockingly inconsistent.

    And the worst part? It’s not even random. In a follow-up piece, I explored why this happens: a technical quirk called batch non-determinism. In plain English: your candidate’s fate changes depending on what else the server was processing at that moment.

    Until volatility is tamed, hands-off AI screening with LLMs is more than risky. It’s completely unexplainable, indefensible and a governance nightmare.

    Go to the comments for:
    👉 Full research
    👉 Follow-up on why AI recruiters play favourites
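The 14% overlap figure is a pairwise set-intersection measurement; here is a minimal sketch of how such a number can be computed. The model names and candidate IDs below are illustrative stand-ins, not the study's actual data:

```python
from itertools import combinations

def shortlist_overlap(shortlists: dict[str, list[str]]) -> float:
    """Mean pairwise Jaccard overlap between model shortlists."""
    scores = []
    for a, b in combinations(shortlists.values(), 2):
        sa, sb = set(a), set(b)
        scores.append(len(sa & sb) / len(sa | sb))
    return sum(scores) / len(scores)

# Illustrative shortlists (candidate IDs), not the study's data.
shortlists = {
    "model_a": ["c1", "c2", "c3", "c4", "c5"],
    "model_b": ["c1", "c6", "c7", "c8", "c9"],
    "model_c": ["c2", "c6", "c10", "c11", "c12"],
}
print(shortlist_overlap(shortlists))  # each pair shares 1 of 9 unique CVs
```

Running the same measurement on repeated calls to the same model would likewise quantify the run-to-run volatility the post describes.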

  • View profile for Saanya Ojha
    Saanya Ojha is an Influencer

    Partner at Bain Capital Ventures

    78,390 followers

    This week MIT dropped a stat engineered to go viral: 95% of enterprise GenAI pilots are failing. Markets, predictably, had a minor existential crisis. Pundits whispered the B-word (“bubble”), traders rotated into defensive stocks, and your colleague forwarded you a link with “is AI overhyped???” in the subject line.

    Let’s be clear: the 95% failure rate isn’t a caution against AI. It’s a mirror held up to how deeply ossified enterprises are. Two truths can coexist: (1) The tech is very real. (2) Most companies are hilariously bad at deploying it.

    If you’re a startup, AI feels like a superpower. No legacy systems. No 17-step approval chains. No legal team asking whether ChatGPT has been “SOC2-audited.” You ship. You iterate. You win. If you’re an enterprise, your org chart looks like a game of Twister and your workflows were last updated when Friends was still airing. You don’t need a better model - you need a cultural lobotomy.

    This isn’t an “AI bubble” popping. It’s the adoption lag every platform shift goes through.
    - Cloud in the 2010s: Endless proofs of concept before actual transformation.
    - Mobile in the 2000s: Enterprises thought an iPhone app was strategy. Spoiler: it wasn’t.
    - Internet in the 90s: Half of Fortune 500 CEOs declared “this is just a fad.” Some of those companies no longer exist.
    History rhymes. The lag isn’t a bug; it’s the default setting.

    Buried beneath the viral 95% headline are 3 lessons enterprises can actually use:
    ▪️ Back-office > front-office. The biggest ROI comes from back-office automation - finance ops, procurement, claims processing - yet over half of AI dollars go into sales and marketing. The treasure’s just buried in a different part of the org chart.
    ▪️ Buy > build. Success rates hit ~67% when companies buy or partner with vendors. DIY attempts succeed a third as often. Unless it’s literally your full-time job to stay current on model architecture, you’ll fall behind. Your engineers don’t need to reinvent an LLM-powered wheel; they need to build where you’re actually differentiated.
    ▪️ Integration > innovation. Pilots flop not because AI “doesn’t work,” but because enterprises don’t know how to weave it into workflows. The “learning gap” is the real killer. Spend as much energy on change management, process design, and user training as you do on the tool itself. Without redesigning processes, “AI adoption” is just a Peloton bought in January and used as a coat rack by March. You didn’t fail at fitness; you failed at follow-through.

    In five years, GenAI will be as invisible - and indispensable - as cloud is today. The difference between the winners and the laggards won’t be access to models, but the courage to rip up processes and rebuild them. The “95% failure” stat doesn’t mean AI is snake oil. It means enterprises are in Year 1 of a 10-year adoption curve. The market just confused growing pains for terminal illness.

  • View profile for Luiza Jarovsky, PhD
    Luiza Jarovsky, PhD is an Influencer

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (92,000+ subscribers), Mother of 3

    128,173 followers

    🚨 BREAKING: An extremely important lawsuit at the intersection of PRIVACY and AI was filed against Otter over its AI meeting assistant's lack of CONSENT from meeting participants. If you use meeting assistants, read this:

    Otter, the AI company being sued, offers an AI-powered service that, like many in this business niche, can transcribe and record the content of private conversations between its users and meeting participants (who are often NOT users and do not know that they are being recorded). Various privacy laws in the U.S. and beyond require that, in such cases, consent from meeting participants is obtained.

    The lawsuit specifically mentions:
    - The Electronic Communications Privacy Act;
    - The Computer Fraud and Abuse Act;
    - The California Invasion of Privacy Act;
    - California’s Comprehensive Computer Data Access and Fraud Act;
    - The California common law torts of intrusion upon seclusion and conversion;
    - The California Unfair Competition Law.

    As more and more people use AI agents, AI meeting assistants, and all sorts of AI-powered tools to "improve productivity," privacy aspects are often forgotten (in yet another manifestation of AI exceptionalism). In this case, according to the lawsuit, the company has explicitly stated that it trains its AI models on recordings and transcriptions made using its meeting assistant.

    The main allegation is that Otter obtains consent only from its account holders but not from other meeting participants. It asks users to make sure other participants consent, shifting the privacy responsibility. As many of you know, this practice is common, and various AI companies shift the privacy responsibility to users, who often ignore (or don't know) what national and state laws actually require.

    So if you use meeting assistants, you should know that it's UNETHICAL and in many places also ILLEGAL to record or transcribe meeting participants without obtaining their consent. Additionally, it's important to keep in mind that AI companies might use this data (which often contains personal information) to train AI, and there could be leaks and other privacy risks involved.

    👉 Link to the lawsuit below.
    👉 Never miss my curations and analyses on AI's legal and ethical challenges: join my newsletter's 74,000+ subscribers.
    👉 To learn more about the intersection of privacy and AI (and many other topics), join the 24th cohort of my AI Governance Training in October.

  • View profile for Alex Wang
    Alex Wang is an Influencer

    Learn AI Together - I share my learning journey into AI & Data Science here, 90% buzzword-free. Follow me and let's grow together!

    1,132,397 followers

    By 2030, we’ll see 92 million jobs lost and 𝟏𝟕𝟎 𝐦𝐢𝐥𝐥𝐢𝐨𝐧 jobs created, according to the latest WEF report. We’re heading toward a global churn of 22% of current jobs by then. And many of the new ones? They’re being shaped and accelerated by AI.

    Some of the fastest-growing roles globally, directly driven by AI adoption, include:
    - AI and Machine Learning Specialists
    - Big Data Analysts
    - AI-augmented UX Designers
    - Information Security Analysts
    - Fintech Engineers
    - Process Automation Specialists
    - ...

    Many of these roles barely existed at scale just a few years ago. And they’re not all technical. We’re also seeing roles like prompt engineers, AI ethics leads, and AI product strategists gaining traction across different industries.

    𝐎𝐧𝐞 𝐬𝐡𝐢𝐟𝐭 𝐭𝐡𝐚𝐭’𝐬 𝐛𝐞𝐜𝐨𝐦𝐢𝐧𝐠 𝐦𝐨𝐫𝐞 𝐯𝐢𝐬𝐢𝐛𝐥𝐞 𝐧𝐨𝐰 𝐢𝐬 𝐭𝐡𝐞 𝐫𝐢𝐬𝐞 𝐨𝐟 𝐀𝐈 𝐚𝐠𝐞𝐧𝐭𝐬. We’re moving from simple model outputs to systems that can take actions, use tools, and follow goals across multiple steps. That shift is bringing new types of roles with it:
    • Engineers and researchers building agent frameworks
    • Product teams defining how agents fit into user journeys
    • Decision engineers designing shared workflows between humans and machines
    • Governance and compliance leads ensuring safety and alignment
    • ...

    And this shift isn’t limited to labs or big tech. Thanks to the growth of 𝐨𝐩𝐞𝐧-𝐬𝐨𝐮𝐫𝐜𝐞 𝐀𝐈 𝐭𝐨𝐨𝐥𝐬, agent development is becoming more accessible, and open-source frameworks like LangChain are lowering the barrier for experimentation.

    📍 We also just open-sourced 𝐆𝐞𝐧𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐎𝐒, a lightweight framework we’ve been using internally to run multi-agent systems. If you’re playing around with agent workflows, feel free to check it out:
    GitHub: https://bit.ly/4kzE1Mt
    And if you’re into open source, a ⭐ would mean a lot!
    __________
    For more on AI and Data Science, plz check my previous posts. I share my journey here. Join me and let's grow together. Alex Wang
    #technology #aiagents #agenticai #generativeai

  • View profile for Terezija Semenski, MSc

    Helping 300,000+ people master AI and Math fundamentals faster | LinkedIn [in]structor 15 courses | Author @ Math Mindset newsletter

    30,591 followers

    I taught myself machine learning more than 10 years ago. If I had to start again today, I wouldn’t touch models, LLMs, or agents first, as many AI experts suggest. I'd start with the math and the code.

    Ugly truth: 90% of people skip the foundations, then wonder why everything feels like magic or falls apart in production. If you want to be different and actually understand ML, not just copy-paste, this is the roadmap I'd follow.

    Start with fundamentals: no matter how fast LLMs or GenAI evolve, your math, code, and logic will keep you relevant. Here's what you should focus on:

    📐 1. Linear Algebra
    Learn these core ideas:
    - Vectors, matrices, tensors
    - Matrix multiplication (dot products, broadcasting)
    - Transpose, inverse, rank, determinants
    - Eigenvalues & eigenvectors (especially for PCA & embeddings)
    - Projections and orthogonality
    ✅ Use NumPy to implement everything yourself
    → Practice matrix ops, dot products, and visualizing transformations with Matplotlib

    🔁 2. Calculus
    Focus on:
    - Derivatives & partial derivatives
    - Chain rule (for backpropagation in neural nets)
    - Gradient descent
    - Convex functions, minima/maxima
    ✅ Use SymPy or JAX to visualize and compute derivatives
    → Plot functions and their gradients to develop deep intuition

    🎲 3. Probability
    You need a solid grip on:
    - Random variables (discrete & continuous)
    - Conditional probability & Bayes' rule
    - Joint & marginal probability
    - The chain rule
    - Expectation, variance, entropy
    - Common distributions: Bernoulli, Binomial, Gaussian, Poisson
    - Central limit theorem
    - The law of large numbers
    ✅ Simulate simple probability experiments in Python with NumPy
    → E.g. simulate sampling from distributions

    📊 4. Statistics
    These are must-know topics:
    - Descriptive stats: mean, median, mode, standard deviation
    - Hypothesis testing: p-values, confidence intervals, t-tests
    - Correlation vs. causation
    - Sampling, bias, and variance
    - Overfitting/underfitting
    - A/B testing basics
    ✅ Use Pandas & SciPy to explore real datasets
    → Calculate descriptive stats, create histograms/box plots, run t-tests

    🔧 Essential Python libraries to learn early:
    - NumPy – for vectorized math and fast array ops
    - Pandas – for loading, cleaning, and analyzing tabular data
    - Matplotlib / Seaborn – for plotting and visualizing distributions, relationships, and trends
    - SymPy – for symbolic math and calculus
    - SciPy – for stats, optimization, and numerical methods
    - Jupyter Notebooks – to combine math, code, and visuals in one place

    📚 Best resources to nail the fundamentals:
    ✅ Machine Learning Foundations math series (ML Foundations: Linear Algebra, Calculus, Probability, and Statistics), a series of 4 courses I created together with LinkedIn Learning
    ✅ Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow by Aurélien Géron
    ✅ The Hundred-Page Machine Learning Book by Andriy Burkov

    If you want to become an actual ML engineer, not just someone who watches and copies demos, start here.

    ♻️ Repost to help others 💚
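To make the "implement everything yourself" advice concrete, here is a minimal NumPy sketch of PCA done by hand, connecting the linear-algebra items (covariance, eigenvalues, eigenvectors) to code. The synthetic dataset is illustrative:

```python
import numpy as np

# Tiny PCA from scratch: center the data, form the covariance matrix,
# and read off its top eigenvector as the first principal component.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
# Second feature is ~2x the first, plus small noise, so the data
# lies almost exactly along the line y = 2x.
data = np.column_stack([x, 2 * x + rng.normal(scale=0.1, size=200)])

centered = data - data.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                    # top principal component

slope = pc1[1] / pc1[0]  # should be close to 2, the line's slope
print(slope)
```

Checking the recovered slope against the known generating slope is exactly the kind of self-verifying exercise that builds intuition faster than watching demos.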

  • View profile for Felix Haas

    Design at Lovable, Angel Investor

    95,498 followers

    Invisible UX is coming 🔥 And it’s going to change how we design products, forever.

    For decades, UX design has been about guiding users through an experience. We’ve done that with visible interfaces: menus, buttons, cards, sliders. We’ve obsessed over layouts, states, and transitions. But with AI, a new kind of interface is emerging: one that’s invisible. One that’s driven by intent, not interaction.

    Think about it. You used to:
    → Open Spotify
    → Scroll through genres
    → Click into “Focus”
    → Pick a playlist
    Now you just say: “Play deep focus music.” No menus. No tapping. No UI. Just intent → output.

    You used to:
    → Search on Airbnb
    → Pick dates, guests, filters
    → Scroll through 50+ listings
    Now we’re entering a world where you guide with words: “Find me a cabin near Oslo with a sauna, available next weekend.”

    So the best UX becomes barely visible. Why does this matter? Because traditional UX gives users options. AI-native UX gives users outcomes. Old UX: “Here are 12 ways to get what you want.” New UX: “Just tell me what you want and we’ll handle the rest.”

    And this goes way beyond voice or chat. It’s about reducing friction: designing systems that understand intent, respond instantly, and get out of the way. The UI isn’t disappearing. It’s dissolving into the background.

    So what should designers do? Rethink your role. Going forward, you’ll not just lay out screens. You’ll design interactions without interfaces. That means:
    → Understanding how people express goals
    → Guiding model behavior through prompt architecture
    → Creating invisible guardrails for trust, speed, and clarity
    You are basically designing for understanding.

    The future of UX won’t be seen. It will be felt. Welcome to the age of invisible UX. Ready for it?
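The "intent → output" step the post describes amounts to mapping free text onto a structured request a backend can execute. A minimal sketch, with simple keyword rules standing in for what a language model would do in a real product (the schema, field names, and rules are all hypothetical):

```python
# Toy intent parser: free-text request -> structured search query.
# Keyword matching here is an illustrative stand-in for an LLM.

def parse_intent(text: str) -> dict:
    text_l = text.lower()
    query = {"type": "stay_search", "filters": []}
    if "cabin" in text_l:
        query["filters"].append("property_type:cabin")
    if "sauna" in text_l:
        query["filters"].append("amenity:sauna")
    for city in ("oslo", "bergen", "stockholm"):  # hypothetical list
        if city in text_l:
            query["location"] = city
    if "next weekend" in text_l:
        query["dates"] = "next_weekend"
    return query

q = parse_intent("Find me a cabin near Oslo with a sauna, available next weekend.")
print(q)
```

The designer's job in this world is deciding what the schema can express and what happens when the parse is wrong: the invisible guardrails the post mentions.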

  • View profile for Christian Klein
    Christian Klein is an Influencer

    CEO of SAP SE

    302,449 followers

    One topic I often get asked about is the impact of #AI on productivity. While there’s no one-size-fits-all answer to that question, AI without doubt represents the greatest economic opportunity since the rise of the internet and is already a key driver of productivity across businesses in every industry.

    At SAP, we’re seeing the huge difference AI makes both for our customers and within our own business. But when it comes to adopting and getting the most out of AI, there are a number of critical success factors.

    One key aspect is regulation. Regulation is a must, but we also have to be careful not to overregulate at the expense of innovation. The focus should be on regulating the outcome, not the technology.

    Another reason companies struggle to realize the value of AI is data quality. High-quality data is the foundation for trusted AI, but companies often face the challenge of large data silos and of bringing together unstructured and structured data. In the age of AI, more than ever, having one common semantic data layer for your business data is key to understanding a business holistically, as well as the external trends that impact it.

    The third key piece is knowledge and change management. The value of AI is clear, but many companies are unsure how best to extract it and whether they have the skills they need to realize its potential for their business. Successfully applying AI is much more than simply implementing a piece of technology. The importance of upskilling, leadership, communication, and culture cannot be overestimated in any business transformation journey.

    In an increasingly unpredictable world, AI plays an important role in helping businesses thrive. Today, it's already driving agility, resilience, and productivity – and by focusing on the right regulatory frameworks, data quality, and change management, these benefits are set to increase even further.
