Generative AI Use Cases

Explore top LinkedIn content from expert professionals.

  • Andrew Ng

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,325,053 followers

    Large language models (LLMs) are typically optimized to answer people's questions. But there is a trend toward models also being optimized to fit into agentic workflows. This will give a huge boost to agentic performance!

    Following ChatGPT's breakaway success at answering questions, a lot of LLM development focused on providing a good consumer experience. So LLMs were tuned to answer questions ("Why did Shakespeare write Macbeth?") or follow human-provided instructions ("Explain why Shakespeare wrote Macbeth"). A large fraction of the datasets for instruction tuning guide models to provide more helpful responses to human-written questions and instructions of the sort one might ask a consumer-facing LLM like those offered by the web interfaces of ChatGPT, Claude, or Gemini.

    But agentic workloads call for different behaviors. Rather than directly generating responses for consumers, AI software may use a model as part of an iterative workflow to reflect on its own output, use tools, write plans, and collaborate in a multi-agent setting. Major model makers are increasingly optimizing models to be used in AI agents as well.

    Take tool use (or function calling). If an LLM is asked about the current weather, it won't be able to derive the information it needs from its training data. Instead, it might generate a request for an API call to get that information. Even before GPT-4 natively supported function calls, application developers were already using LLMs to generate function calls, but by writing more complex prompts (such as variations of ReAct prompts) that tell the LLM what functions are available and then have it generate a string that a separate software routine parses (perhaps with regular expressions) to figure out whether it wants to call a function. Generating such calls became much more reliable after GPT-4, and then many other models, natively supported function calling.
Today, LLMs can decide to call functions to search for information for retrieval augmented generation (RAG), execute code, send emails, place orders online, and much more. Recently, Anthropic released a version of its model that is capable of computer use, using mouse-clicks and keystrokes to operate a computer (usually a virtual machine). I’ve enjoyed playing with the demo. While other teams have been prompting LLMs to use computers to build a new generation of RPA (robotic process automation) applications, native support for computer use by a major LLM provider is a great step forward. This will help many developers! [Reached length limit; full text: https://lnkd.in/gHmiM3Tx ]
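The pre-native pattern the post describes (a prompt listing the available functions, plus a separate routine that parses the model's reply with a regular expression) might be sketched as follows. The tool name, prompt wording, and `CALL ...` convention are illustrative assumptions, not any specific framework's API, and no real LLM is called here:

```python
import json
import re

# Hypothetical prompt fragment telling the model which tools exist and
# how to request one (pre-native-function-calling style):
TOOL_PROMPT = """You can call these functions:
- get_weather(city: str) -> current weather
To call one, reply exactly: CALL <name>(<json-args>)"""

CALL_RE = re.compile(r"CALL\s+(\w+)\((.*)\)\s*$")

def parse_tool_call(model_output: str):
    """Return (function_name, args_dict) if the model requested a call, else None."""
    match = CALL_RE.search(model_output.strip())
    if not match:
        return None
    name, raw_args = match.groups()
    return name, json.loads(raw_args or "{}")

# Simulated model output, standing in for an actual LLM reply:
reply = 'CALL get_weather({"city": "Paris"})'
print(parse_tool_call(reply))  # ('get_weather', {'city': 'Paris'})
```

Native function calling moves this parsing into the model API itself, returning a structured call object instead of a string to regex, which is why it is so much more reliable.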

  • Jason Saltzman

    Head of Insights @ CB Insights | Former Professional 🚴♂️

    30,617 followers

    Applied Intuition is paving the way to self-driving success. Applied Intuition's $600M Series F ($15B valuation) highlights the autonomous vehicle market's AI-enabled resurgence and how infrastructure companies are capturing outsized value as the industry matures. The broader autonomous driving space saw equity funding triple last year, driven by massive rounds for Waymo ($5.6B Series C) and Wayve ($1.1B Series C).

    Applied Intuition's raise is the latest signal of a broader market revival, in which generative AI is accelerating the timeline for fully autonomous driving by removing remaining hurdles around cost, explainability, and vehicle-passenger communication. While everyone debates which manufacturer(s) will "win" autonomous driving, Applied Intuition's "picks and shovels" strategy is paying off. The company is quickly becoming the foundational simulation and validation software provider that everyone needs. "Everyone" includes 18 of the top 20 automotive OEMs as customers, plus strategic partnerships with Audi, TRATON Group, Isuzu Motors, and OpenAI. OEMs are realizing they need specialized software partners, not just in-house development. It's not just about building the cars; it's about building the tools that build the cars.

    Broader AV market dynamics particularly favor companies like Applied Intuition. Major OEMs like GM and Hyundai injected $1.4B into their self-driving units last year but are facing safety issues and commercialization delays. This creates opportunities for specialized software providers to offer cost-effective alternatives to in-house development. As in many emerging tech markets, the biggest winners in AV may be the ones building the critical infrastructure that makes the end product possible, rather than the end product itself.

    Applied Intuition is a clear leader in the resurgent AV space, and its multi-sector approach across automotive, trucking, defense, and industrial applications gives it a sustainable competitive moat.
The latest funding round positions Applied Intuition to capitalize on the autonomous vehicle market's second wave, where established software platforms become increasingly valuable as the industry moves from experimentation to commercial deployment.

  • Blake Oliver

    Host of The Accounting Podcast, The Most Popular Podcast for Accountants | Creator of Earmark Where Accountants Earn Free CPE Anytime, Anywhere

    66,273 followers

    Imagine having an assistant who knows all 665 pages of PCAOB guidelines by heart. That's exactly what Josh Poresky has achieved with RAG (Retrieval Augmented Generation)—and it's set to transform how we approach SOX audits. 🤖

    In this bonus episode of The Accounting Podcast, Josh explains how RAG allows an AI like ChatGPT to focus on a specific set of documents. By feeding the 665-page PCAOB audit guidelines into a RAG program, he's created a tool that can answer complex audit questions in seconds. Now you can pose specific scenarios to the bot—like whether a certain password control is an issue—and receive clear, concise answers with explanations. No more sifting through hundreds of pages or waiting for a manager's input!

    Josh demoed the app he's built with RAG. We explored how it handles a case where management didn't sign off on a user access review. The bot flagged it as a PCAOB issue but also considered whether a compensating control could mitigate it. The best part? RAG prevents the bot from "hallucinating" information that isn't in the documents. When asked about Tom Brady, it simply stated that he wasn't mentioned in the provided context.

    I'm excited to see how RAG and similar AI techniques can streamline complex accounting processes. It's a great example of how AI will help auditors work smarter, not harder. What other applications do you see for this technology in accounting and finance? Let me know in the comments 👇 #AI #Accounting #SOX #Audit
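The grounding behavior described (answer only from retrieved passages, and refuse when the question is outside the corpus, as in the Tom Brady test) can be sketched with a toy retriever. The passages and the word-overlap scoring are illustrative stand-ins, not the actual PCAOB corpus or Josh's app, which would use vector search and an LLM:

```python
# Toy stand-ins for indexed document chunks:
PASSAGES = [
    "User access reviews must be signed off by management each quarter.",
    "A compensating control may mitigate a deficient password control.",
]

def retrieve(question: str, passages, min_overlap: int = 2):
    """Rank passages by word overlap with the question; a naive stand-in
    for the embedding similarity a real RAG system would use. Returns
    None when nothing relevant is found, so the bot can say so instead
    of hallucinating."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.lower().split())), p) for p in passages]
    best_score, best = max(scored)
    return best if best_score >= min_overlap else None

# In scope: finds the relevant passage to ground the answer.
print(retrieve("Is a missing management sign-off on a user access review an issue?", PASSAGES))
# Out of scope: nothing retrieved, so the bot reports "not in the provided context".
print(retrieve("Who is Tom Brady?", PASSAGES))  # None
```

The key design point is the `None` branch: forcing the answer to come from retrieved text is what keeps the bot from inventing guidance that isn't in the documents.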

  • Brij kishore Pandey

    AI Architect | AI Engineer | Generative AI | Agentic AI

    693,431 followers

    Do you rely on one large generalist model to power multiple use cases, or do you build a suite of specialized models fine-tuned for specific tasks?

    Large Language Models (LLMs) act as the generalists. One model can handle many functions across financial services:
    - Fraud Detection
    - Automated Investing
    - Customer Service Chatbots
    - Personalized Banking
    - Consumer Loan Underwriting
    This flexibility makes them ideal for exploration, rapid prototyping, and scenarios where breadth of understanding matters more than hyper-optimization.

    Small Language Models (SLMs) act as the specialists. Each is optimized for a single task, such as:
    - Loan Qualification
    - Consumer Loan Underwriting
    - Fraud Detection
    The benefit? Efficiency, accuracy, and cost control. By narrowing the scope, SLMs can outperform generalist models in production environments where precision is non-negotiable.

    The Hybrid Future: the reality isn’t LLM or SLM — it’s both. LLMs will serve as the reasoning engines, orchestrating complex workflows and bridging gaps across domains. SLMs will deliver deep expertise in critical tasks, ensuring enterprise-grade performance. This hybrid approach mirrors how organizations operate: broad leadership supported by domain experts.

    As AI adoption accelerates, companies that can strike the right balance between generalist adaptability and specialist efficiency will set the standard for the next wave of digital transformation.

    Question for you: In your industry, are you leaning more toward the power of generalist LLMs, the precision of SLMs, or a blended strategy?
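A hybrid deployment usually needs a router that sends each request to the right tier. Here is a minimal sketch of that idea; the task names and the routing table are illustrative assumptions drawn from the post's examples, and a production router might also weigh confidence scores, latency budgets, and cost per token:

```python
# Scoped, precision-critical tasks that a fine-tuned specialist handles well:
SLM_TASKS = {"loan_qualification", "loan_underwriting", "fraud_detection"}

def route(task: str) -> str:
    """Send narrow tasks to a specialist SLM; everything open-ended
    falls back to the generalist LLM 'reasoning engine'."""
    return "slm" if task in SLM_TASKS else "llm"

print(route("fraud_detection"))  # slm: narrow, precision-critical
print(route("customer_chat"))    # llm: open-ended conversation
```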

  • Jim Rowan

    US Head of AI at Deloitte

    29,968 followers

    The automotive industry is undergoing significant transformation, driven by autonomous and software-defined vehicles. What could slow down progress? The constraints of legacy data and the need to generate and validate massive amounts of new #data to power these new technologies.

    Enter Generative AI. In this Automotive News article, Richard Whitford, Deloitte’s Automotive AI and Data practice leader, shares why GenAI isn’t just a solution, “it’s a catalyst for change.” Here are 4 ways GenAI can help transform the automotive industry:
    1. Data simulation: Creating synthetic data that mimics real-world data helps accelerate the training of AI models and speeds up the development of new automotive technologies.
    2. Improving code quality and resource efficiency: Generative AI can standardize and streamline software development processes.
    3. Simulation and quality control: Creating realistic simulations and synthetic data for quality control can significantly reduce development cycles and improve product quality, leading to safer vehicles.
    4. Enhancing customer experiences: GenAI can help create a unified journey across all touchpoints and enable more personalized in-car experiences.

    Take a deeper dive here https://bit.ly/4fOmUpc to learn how automakers can prepare for a GenAI-fueled future.
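To make point 1 concrete, synthetic training data at its simplest means sampling records whose fields and ranges mimic real fleet data. Everything here is a made-up illustration (the field names, ranges, and distributions are assumptions, not Deloitte's or any automaker's schema); real pipelines use learned generative models rather than uniform sampling:

```python
import random

random.seed(0)  # reproducible sampling for the sketch

def synthetic_driving_record():
    """One fake training record with plausible ranges for a driving scene."""
    return {
        "ego_speed_kph": round(random.uniform(0, 120), 1),
        "obstacle_distance_m": round(random.uniform(2, 150), 1),
        "weather": random.choice(["clear", "rain", "fog"]),
    }

# A tiny synthetic batch; scale the count to augment scarce real-world data.
dataset = [synthetic_driving_record() for _ in range(3)]
print(dataset)
```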

  • Sarthak Rastogi

    AI engineer | Posts on agents + advanced RAG | Experienced in LLM research, ML engineering, Software Engineering

    22,073 followers

    This is how Aimpoint Digital built an AI agent system to generate personalised travel itineraries in under 30 seconds, saving hours of planning time.

    - Aimpoint Digital's system uses a multi-RAG architecture: it has three parallel RAG systems to gather info quickly. Each system focuses on different aspects such as places, restaurants, and events to give detailed itinerary options.
    - They utilised Databricks' Vector Search service to help the system scale. The architecture currently supports data for 100s of cities, with an existing DB of ~500 restaurants in Paris, ready to expand.
    - To stay up to date, the system adds Delta tables with Change Data Feed. This updates the vector search indices automatically whenever there's a change in source data, keeping recommendations fresh and accurate.
    - The AI agent system runs on standalone Databricks Vector Search Endpoints for querying. This setup has provisioned throughput endpoints to serve LLM requests.
    - Evaluation metrics like precision, recall, and NDCG quantify the quality of data retrieval. The system also uses an LLM-as-judge to check output quality on aspects like professionalism, based on examples.

    Link to the article: https://lnkd.in/gFGvyTT9 #AI #RAG #GenAI
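The parallel fan-out across the three retrievers can be sketched with a thread pool. The in-memory dictionaries below are toy stand-ins for the Databricks Vector Search endpoints, and the index names and sample data are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for the three domain-specific search indexes:
INDEXES = {
    "places": {"paris": ["Louvre", "Montmartre"]},
    "restaurants": {"paris": ["Le Comptoir"]},
    "events": {"paris": ["Seine jazz cruise"]},
}

def retrieve(index_name: str, city: str):
    """Query one index; a real system would hit a vector search endpoint."""
    return index_name, INDEXES[index_name].get(city.lower(), [])

def gather_context(city: str) -> dict:
    """Query all three indexes concurrently, mirroring the parallel
    RAG systems, then merge results into one itinerary context."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        return dict(pool.map(retrieve, INDEXES, [city] * len(INDEXES)))

print(gather_context("Paris"))
```

Running the retrievers concurrently rather than sequentially is what keeps end-to-end latency within the under-30-seconds budget the post cites.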

  • Roman Frolov

    CEO at Eternal

    52,531 followers

    Can large language models be used in biotech? The short answer is yes. While LLMs are often associated with chatbots, their capabilities extend beyond that.

    In biotech, much of the data comes in the form of sequences – like nucleotides in DNA, or amino acids in proteins. Similar to sentences in natural language, these biological sequences have unique semantic meanings based on the arrangement of their components. When input data is fed into an LLM, a transformer converts these sequences into contextual vectors using its attention mechanism. This process allows the model to understand the context and relationships within the data, enabling it to predict subsequent elements.

    One such use case is the prediction of neoantigens, which enable targeting of tumor cells in personalized cancer immunotherapies. Neoantigens are tumor-specific mutated peptides presented on the surface of tumor cells because they bind to human leukocyte antigen (HLA) molecules. LLMs can predict this binding affinity, which allows the development of personalized therapies that use the patient's own immune system to kill tumor cells without damaging healthy tissues.
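The sentence analogy starts at tokenization: a peptide is tokenized residue by residue the way a sentence is tokenized word by word, before a transformer turns the tokens into contextual vectors. This is an illustrative sketch only; real protein language models add learned embeddings and attention layers on top of these token ids:

```python
# Vocabulary: the 20 standard amino acids, each a "word" of the sequence language.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TOKEN_IDS = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize_peptide(seq: str):
    """Map each residue to an integer token id, exactly as a text
    tokenizer maps words before embedding."""
    return [TOKEN_IDS[aa] for aa in seq.upper()]

# A short peptide of the kind presented by HLA molecules:
print(tokenize_peptide("SIINFEKL"))
```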

  • Vignesh Kumar

    AI Product & Engineering | Start-up Mentor & Advisor | TEDx & Keynote Speaker | LinkedIn Top Voice ’24 | Building AI Community Pair.AI | Director - Orange Business, Cisco, VMware | Cloud - SaaS & IaaS | kumarvignesh.com

    19,577 followers

    🚀 AI is now my go-to travel planner

    I travel extensively—both for work and personal vacations. And if there’s one thing I’ve learned over the years, it’s that planning a trip can be exhausting. From picking a destination to figuring out flights, hotels, visa requirements, and local transport—travel research used to take hours, sometimes days.

    Now? AI does most of the heavy lifting for me. It’s like having a personal travel assistant that helps with every step:
    ✅ Shortlisting destinations – I just describe what I’m looking for, and AI suggests options based on preferences, budget, and time of year.
    ✅ Weighing pros and cons – Instead of going through multiple websites, I can ask AI to compare weather, cost, activities, and travel restrictions.
    ✅ Planning the details – Flights, hotels, visa requirements, local transport—AI pulls up all the relevant information instantly.
    ✅ Creating itineraries – It suggests the best places to visit, organizes the days efficiently, and even provides tips on local customs, packing lists, and food recommendations.

    Memory in AI tools like ChatGPT is making this even better. Instead of starting from scratch every time, AI remembers my family’s preferences.
    💡 It knows we love cultural experiences over adventure sports.
    💡 It remembers we prefer apartment stays over hotels.
    💡 It factors in that we like planning buffer time in itineraries.
    So now, when I ask for travel suggestions, I don’t get generic recommendations. I get suggestions tailored to my family's travel style.

    And Google is taking AI-powered travel a step further:
    ◾ Google Maps now automatically organizes your trip research. It identifies places you’ve searched for and groups them into lists—so you don’t have to manually save them.
    ◾ Gemini now lets users create their own AI travel expert for free. Want a custom trip planner? You can build an AI assistant that understands your preferences, suggests destinations, helps with packing, and even gives real-time tips.
    ◾ Google Lens is adding more languages for instant translation of signs, menus, and more—making international travel even smoother. New languages include Hindi, Indonesian, Japanese, Korean, Portuguese, and Spanish.

    For travelers like me, this means less stress and more seamless experiences—no more juggling multiple apps and websites. For companies in the travel space, AI is reshaping how people plan and book trips. Businesses that integrate AI-powered recommendations, itineraries, and real-time assistance will have a clear edge in delivering personalized and frictionless experiences. The future of travel is AI-powered, and it’s already here.

    I write about #artificialintelligence | #technology | #startups | #mentoring | #leadership | #financialindependence

    PS: All views are personal. Vignesh Kumar

  • Pan Wu

    Senior Data Science Manager at Meta

    50,000 followers

    There’s been a lot of discussion about how Large Language Models (LLMs) power customer-facing features like chatbots. But their impact goes beyond that—LLMs can also enhance the backend of machine learning systems in significant ways.

    In this tech blog, Coupang’s machine learning engineers share how the team leverages LLMs to advance existing ML products. They first categorized Coupang’s ML models into three key areas: recommendation models that personalize shopping experiences and optimize recommendation surfaces, content understanding models that enhance product, customer, and merchant representation to improve shopping interactions, and forecasting models that support pricing, logistics, and delivery operations.

    With these existing ML models in place, the team integrates LLMs and multimodal models to develop Foundation Models, which can handle multiple tasks rather than being trained for specific use cases. These models improve customer experience in several ways. Vision-language models enhance product embeddings by jointly modeling image and text data, and labels generated by LLMs serve as weak supervision signals to train other models. Additionally, LLMs enable a deeper understanding of product data, including titles, descriptions, reviews, and seller information, resulting in a single LLM-powered categorizer that classifies all product categories with greater precision.

    The blog also dives into best practices for integrating LLMs, covering technical challenges, development patterns, and optimization strategies. For those looking to elevate ML performance with LLMs, this serves as a valuable reference.
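The weak-supervision pattern mentioned above can be sketched in a few lines: an LLM labels raw product text, and those noisy labels become training pairs for a cheaper specialized model. `llm_label` below is a hypothetical stand-in for a real LLM call, and the categories and titles are made up, not Coupang's taxonomy:

```python
def llm_label(title: str) -> str:
    """Pretend LLM categorizer; in production this would be a model call
    returning a product category for the given title."""
    return "electronics" if "usb" in title.lower() else "home"

# Unlabeled catalog text that would be too expensive to label by hand:
titles = ["USB-C charging cable", "Ceramic dinner plate set"]

# Noisy (weakly supervised) training pairs for a downstream classifier:
weak_labels = [(t, llm_label(t)) for t in titles]
print(weak_labels)
```

The trade-off is that LLM labels are imperfect, but at catalog scale even noisy supervision lets a small, fast classifier learn what no hand-labeling budget could cover.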
    #MachineLearning #DataScience #LLM #LargeLanguageModel #AI #SnacksWeeklyonDataScience

    Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
    -- Spotify: https://lnkd.in/gKgaMvbh
    -- Apple Podcast: https://lnkd.in/gj6aPBBY
    -- Youtube: https://lnkd.in/gcwPeBmR
    https://lnkd.in/gvaUuF4G

  • Himanshu J.

    Building Aligned, Safe and Secure AI

    27,129 followers

    Are we overusing LLMs in agentic AI? A recent paper from NVIDIA Research and the Georgia Institute of Technology suggests that Small Language Models (SLMs) might offer a more effective, cost-efficient, and responsible alternative to Large Language Models (LLMs) in the realm of agentic AI.

    The key argument is that SLMs, due to their power, suitability, and cost-effectiveness, are better suited for many agentic tasks, particularly those that are repetitive, scoped, and tool-oriented rather than open-ended conversations. These tasks include parsing forms, calling APIs, data extraction, and generating structured outputs.

    SLMs offer several advantages:
    - Cost-effectiveness and energy efficiency.
    - Improved latency and deployment at the edge.
    - Simplified fine-tuning and increased flexibility.
    - Mitigation of hallucination risks through strict alignment.

    In case studies like MetaGPT, Open Operator, and Cradle, replacing 40–70% of LLM calls with SLMs has been shown to maintain accuracy while promoting smarter modular design. A notable suggestion is to move away from viewing agents as singular entities powered by a single LLM. Instead, the proposal is to construct systems from multiple SLMs, like Lego blocks, resorting to LLMs selectively as necessary.

    Adopting an SLM-first strategy could lead to:
    - More cost-effective AI infrastructure.
    - Safer and more manageable agents.
    - Increased accessibility to agentic technology.

    To delve deeper, you can read the paper here: https://lnkd.in/dyMHQWAq

    What are your thoughts on embracing an SLM-first paradigm in the realm of Agentic AI? #AgenticAI #SLMs #LLMs #AIInfrastructure #ModularAI #EdgeAI #NVIDIA #LanguageModels #SustainableAI #AIForEveryone
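An SLM-first dispatcher along the lines the paper suggests might look like this: scoped, tool-oriented steps default to an SLM, with escalation to an LLM only for open-ended steps. The workflow steps and the `open_ended` flag are illustrative assumptions, not the paper's actual benchmark tasks:

```python
# A toy agent workflow: two scoped, tool-oriented steps and one open-ended step.
WORKFLOW = [
    {"step": "parse_form", "open_ended": False},
    {"step": "call_pricing_api", "open_ended": False},
    {"step": "draft_negotiation_email", "open_ended": True},
]

def dispatch(workflow):
    """Assign each step to a model tier (SLM-first, LLM as fallback)
    and report what fraction of calls the SLMs absorb."""
    assignments = {s["step"]: ("llm" if s["open_ended"] else "slm") for s in workflow}
    slm_share = sum(v == "slm" for v in assignments.values()) / len(assignments)
    return assignments, slm_share

plan, share = dispatch(WORKFLOW)
print(plan)
print(f"{share:.0%} of calls handled by SLMs")
```

In this toy workflow two of three calls go to SLMs, which sits inside the 40–70% replacement range the case studies report.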
