Engineering Process Improvement Projects


  • View profile for Murray Robinson

    Digital Leader / Agile AI Vibe Coding Product Founder 🚀 to the Moon and all that bollocks🚀

    13,204 followers

    As a client project manager, I consistently found that offshore software development teams from major providers like Infosys, Accenture, IBM, and others delivered software that failed a third of our UAT tests after the provider's independent dedicated QA teams had passed it. And when we got a fix back, it failed at the same rate, meaning some features cycled through Dev/QA/UAT ten times before they worked.

    I got to know some of the onshore technical leaders from these companies well enough for them to tell me confidentially that we were getting such poor quality because the offshore teams were full of junior developers who didn't know what they were doing and didn't use modern software engineering practices like test-driven development. And their dedicated QA teams couldn't prevent these quality issues because they were full of junior testers who didn't know what they were doing, didn't automate tests, and were ordered to test and pass everything quickly to avoid falling behind schedule.

    So poor development and QA practices were built into the system development process, and independent QA teams didn't fix it. Independent dedicated QA teams are an outdated and costly approach to quality. It's like a car factory that consistently produces defect-ridden vehicles only to disassemble and fix them later.

    Instead of testing and fixing features at the end, we should build quality into the process from the start. Modern engineering teams do this by working in cross-functional teams that use test-driven development to define testable requirements and continuously review, test, and integrate their work. This allows them to catch and address issues early, resulting in faster, more efficient, and higher-quality development.

    In modern engineering teams, QA specialists are quality champions. Their expertise strengthens the team's ability to build robust systems, ensuring quality is integral to how the product is built from the outset. The old model, where testing is done after development, belongs in the past. Today, quality is everyone's responsibility—not through role dilution but through shared accountability, collaboration, and modern engineering practices.
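    A minimal sketch of the test-first loop the post advocates, assuming a hypothetical discount rule (none of these names come from the post). In TDD the tests are written first and fail; the implementation is then written to make them pass:

    ```python
    # Minimal red/green TDD loop with pytest. In practice the tests below are
    # written first (red), then apply_discount is written to pass them (green).
    # The discount rule and all names are hypothetical, for illustration only.
    import pytest

    def apply_discount(total: float) -> float:
        """Minimal implementation, written second to satisfy the tests."""
        if total < 0:
            raise ValueError("total must be non-negative")
        return total * 0.9 if total > 100 else total

    # The "testable requirement": orders over $100 get 10% off.
    def test_discount_applies_over_100():
        assert apply_discount(200.0) == 180.0

    def test_no_discount_at_or_below_100():
        assert apply_discount(100.0) == 100.0

    def test_negative_total_rejected():
        with pytest.raises(ValueError):
            apply_discount(-5.0)
    ```

    Run with `pytest`; each new requirement starts as a failing test, which is what makes the requirement testable in the first place.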

  • View profile for Andrew Boyagi

    Customer CTO @ Atlassian

    10,511 followers

    Engineers aren't babies. Give them the time and they'll improve their own productivity.

    All this focus on improving the experience and productivity FOR engineers... they have some responsibility here too.

    Platform teams are important, but they're not a complete solution. Platform teams will only tackle the problems faced by the majority of engineers and teams. What about all the problems that affect only a subset of teams? Platform teams are busy; the reality is they will never get around to addressing the "smaller" problems. Nor should they. Engineers are more than capable of solving things for themselves.

    Think of platform teams as the top-down approach to improving DevEx, and giving engineers time to improve things for themselves as the bottom-up approach. If you want meaningful improvements to experience and productivity, you need both.

    An example close to home: engineering teams at Atlassian use 10% of their time to improve things for themselves. This is separate from the efforts of platform and experience teams. You don't need to give engineers 10% time (although it does work well), but you should have some strategy for empowering engineers to improve their own productivity.

    #DeveloperExperience #DevOps #PlatformEngineering

  • View profile for Addy Osmani

    Director, Google Cloud AI. Best-selling Author. Speaker. AI, DX, UX. I want to see you win.

    238,691 followers

    "Focus on outcomes vs. outputs. Features don't automatically create value" The pivot from an emphasis on outputs to outcomes can be a critical paradigm shift for teams intent on building solutions that provide real value and enhance customer satisfaction. I just wrote about this topic in my new article on LeadDev: https://lnkd.in/gYqwH32S An output signifies the result of an activity (for example, launching a new feature), whereas an outcome is a change in customer behavior that drives business results (e.g. user-happiness improved, thanks to this over time we may see an impact to business metrics like sales). Real-world Scenario Imagine you're Spotify. A newly launched feature (an output), enables listeners to swiftly save their favorite songs (an outcome). The outcome of this results in heightened user satisfaction with usability, an increase in subscription numbers, and a subsequent rise in sales over the following six months (impact). Guiding Teams: Outputs, Impacts, and Outcomes To optimize efficacy, it's imperative to understand the distinctions between guidance based on outputs, impacts, and outcomes: Guidance based on Outputs: This entails asking teams to develop specific new features, products, or enhancements. However, this approach does not inherently ensure the provision of value to the users or the business. Guidance based on Impact: This encourages the team to deliver overarching value, such as revenue enhancement or cost reduction. Although essential, this may not offer explicit direction for the team or assist in the creation of user-oriented solutions. Guidance based on Outcomes: This involves asking teams to create specific customer behavior changes that drive business results. This allows teams to find the right solution and keeps them focused on delivering value to the users and the business. Employing an Outcome-Oriented Approach The value of an outcome-oriented approach is evident when uncertainties abound in a new initiative, product, or feature. Such uncertainties are a common occurrence in software engineering, and outcomes offer a means to define goals that encourage teams to experiment with various solutions until the right one is identified. The Limitations of an Outcome-Oriented Approach In instances where a solution is almost guaranteed to work, such as routine maintenance or bug fixes, the outcome-oriented approach may not be the best fit. For these scenarios, an output-focused plan is more fitting. Occasionally, I'm asked about where elements with no clear link to outcome fit in (like technical debt, code health). This often boils down to perspective (e.g., neglecting these could increase the long-term cost of outcome execution, framing in terms of the value back to the business, but can still have their place). Concentrate on what truly counts. Illustration credit: Workpath: https://lnkd.in/gFRcs-v7 #softwareengineering #motivation #productivity #lifeatgoogle

  • View profile for Talila Millman

    Chief Technology Officer | Board Director | Advisor | Speaker | Author | Innovation | Strategy | Change Management

    9,872 followers

    As an advisor to tech scaleups, and a former CTO and SVP of Engineering, I've often encountered a familiar CEO complaint: "Our engineering team is too slow!" However, focusing solely on increasing individual productivity is rarely the solution. Sometimes the answer is changing the organizational structure.

    🔍 The Issue with Flat Structures: Time to market was a major problem in a scale-up I advised. They had a flat structure in which 40+ engineers reported directly to the VP of Engineering, and all of them shared equal accountability for delivery of the software.

    🚧 The Consequences: Major overcommitment. People raised their hands to take on work even when the group was already overextended. Nobody fully understood the team's capacity versus the workload they had actually taken on. This led to a lack of predictability, chronic delays, unhappy customers, and ultimately a tarnished reputation.

    🛠️ The Solution: Transitioning to a hierarchical structure with focused teams and accountable, experienced leaders was the game-changer. This shift brought clarity, accountability, and much-needed structure.

    📈 The Results: Predictable schedules, improved customer satisfaction, and a thriving engineering culture.

    ✅ Takeaways for Your Organization: Examine your organization with a critical eye: Is your ownership and accountability structure clear? Are your teams sized and focused appropriately? Do your leaders have the authority to deliver effectively?

    For more on the case study, and on building a sustainable, efficient, and customer-centric engineering team, see the blog post.

    💭 I'm curious to hear your thoughts: Have you faced similar challenges? How did you address them? Let's share insights and grow together!

    #EngineeringManagement #Leadership #Productivity

    _______________
    ➡️ I am Talila Millman, a fractional CTO, a management advisor, and a leadership coach. I help CEOs and their C-suites grow profit and scale through an optimal product portfolio and an operating system for product management and engineering excellence.

    📘 My book, The TRIUMPH Framework: 7 Steps to Leading Organizational Transformation, will be published in Spring 2024: https://lnkd.in/eVYGkz-e

  • View profile for Shubham Singh

    SDE 3-ML | Flipkart

    3,046 followers

    A junior reached out to me last week. One of our APIs was collapsing under 150 requests per second. Yes, only 150.

    He had tried everything:
    * Added an in-memory cache
    * Scaled the K8s pods
    * Increased CPU and memory

    Nothing worked. The API still couldn't scale beyond 150 RPS. Latency? Upwards of 1 minute. 🤯 Brain = blown.

    So I rolled up my sleeves and started digging: studied the code, the query patterns, and the call graphs. Turns out, the problem wasn't hardware. It was design.

    It was a bulk API processing 70 requests per call, and for every request it was:
    1. Making multiple synchronous downstream calls
    2. Hitting the DB repeatedly for the same data
    3. Using local caches (a different one in each of the 15 pods!)

    So instead of adding more pods, we redesigned the flow:
    1. Reduced 350 DB calls → 5 DB calls
    2. Built a common context object shared across all requests in a call
    3. Shifted reads to dedicated read replicas
    4. Moved from in-memory caches to a Redis cache shared across pods

    Results:
    1. 20× higher throughput — 3K QPS
    2. ~75× lower latency (~60 s → 0.8 s)
    3. 50% lower infra cost (fewer pods, better design)

    The insight?
    1. Most scalability issues aren't infrastructure limits; they're architectural inefficiencies disguised as capacity problems.
    2. Scaling isn't about throwing hardware at the problem. It's about tightening data paths, minimizing redundancy, and respecting latency budgets.

    Before you spin up the next node, ask yourself: is my architecture optimized enough to earn that node?
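    A minimal sketch of the redesigned flow described above (the post includes no code, so all names here are hypothetical): dedupe the keys needed across the whole bulk call, serve what you can from a Redis cache shared by all pods, make one batched read-replica query for the misses, and hand every request the same context object.

    ```python
    # Hypothetical sketch of the batched bulk flow: one shared context per
    # call, deduplicated DB reads, and a pod-shared Redis cache.
    import json
    import redis  # assumes the redis-py client (pip install redis)

    cache = redis.Redis(host="redis", port=6379)  # one cache for all pods

    def fetch_many_from_replica(keys):
        """Stand-in for ONE batched read-replica query, e.g. WHERE id IN (...)."""
        return {k: {"id": k} for k in keys}

    def build_context(requests):
        """Resolve every distinct key across the batch exactly once."""
        keys = list({k for req in requests for k in req["keys"]})  # dedupe
        hits = {k: json.loads(v)
                for k, v in zip(keys, cache.mget(keys)) if v is not None}
        missing = [k for k in keys if k not in hits]
        if missing:
            fresh = fetch_many_from_replica(missing)  # 1 DB call, not 1 per request
            for k, v in fresh.items():
                cache.set(k, json.dumps(v), ex=300)   # share result across pods
            hits.update(fresh)
        return hits

    def handle_request(req, ctx):
        """Serve one of the ~70 bundled requests from the shared context."""
        return [ctx[k] for k in req["keys"]]

    def handle_bulk(requests):
        ctx = build_context(requests)  # built once per bulk call
        return [handle_request(req, ctx) for req in requests]
    ```

    The design point is the same as the post's: the per-request fan-out to the DB is replaced by one deduplicated batch read, so adding pods is no longer the lever.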

  • View profile for Moe Roghabadi

    Global Director, Risk Solutions @ Hatch | PhD in Construction Management

    5,395 followers

    How Does the Constructability Concept During the Planning Phase Contribute to Cost Certainty in Infrastructure Projects?

    Mega infrastructure projects are inherently risky, with many uncertainties involved. Statistics show that, on average, real infrastructure projects come in 67% over budget and 44% behind schedule. Proactively predicting the risks and changes that may be encountered during the construction, operation, and maintenance phases is therefore critical for reducing these overruns and achieving cost certainty goals.

    Constructability is a commonly used framework for owners to proactively identify the sources of future changes, preventing unnecessary costly expenses during and after the construction phase. Research shows that effective constructability reviews during the design development phase can resolve up to 75% of field problems and mistakes. According to a case example provided by CII (2009), the life cycle costs of an oil production facility in the US were reduced from $3.8 billion to $1.4 billion, mainly due to the upfront implementation of a constructability program during the design development phase [1].

    Research conducted by [2] highlighted the importance of focusing on project life cycle costs during constructability review, rather than just design and construction costs, as owner organizations have been suffering from the costs of rework during the O&M phases of their projects. The attached figure shows that these two phases account for around 50% to 80% of total life cycle costs. Despite this high share, the risks associated with these phases (operability and maintainability) are often underestimated during constructability reviews, causing significant changes and overruns once projects are in service.

    In my recent post, I emphasized the significance of Early Contractor Involvement (ECI) through collaborative contracts and incentivization strategies for managing life cycle risks early on. Although many public owners in North America are adopting and accelerating these contracting models, there remains skepticism in the market regarding the value and cost-effectiveness of ECI during the planning phase. The research above shows that the cost of early contractor involvement during planning is minor (if implemented effectively) compared to the cost of resolving field problems and mistakes: addressing 75% of field problems early provides several times the cost savings of dealing with them later.

    In your view, what are the benefits and challenges of implementing constructability reviews during the planning phase? Your thoughts are appreciated.

    Source:
    [1] https://lnkd.in/giEQnBGp
    [2] https://lnkd.in/gqDiJiBY

    #riskmanagement #decisionmaking #value #constructability #costoverrun #costsaving #infrastructure #transit #rail #uncertainty #moeroghabadi Hatch

  • View profile for Akhil Yash Tiwari

    Building Product Space | Helping aspiring PMs to break into product roles from any background

    22,510 followers

    Why product roadmaps should be outcome-based, not feature-driven

    We do sprints to ship features, and they don't always work out. Why? Because features alone don't move the needle; outcomes do.

    A practice I usually follow is to ask myself: "What problem are we solving, and how will we measure success?"

    And that's how we pivot from feature factories to outcome-driven roadmaps, with actionable steps to make it stick.

    𝗪𝗵𝘆 𝗢𝘂𝘁𝗰𝗼𝗺𝗲𝘀 > 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀
    Outcome-based roadmaps focus on measurable results (e.g., "Increase free-to-paid conversion by 15%" vs. "Build a pricing calculator"). This shift:
    - Aligns teams around business goals, not just deliverables.
    - Empowers creativity (solve the problem, don't just check a box).
    - Reduces waste by killing initiatives that don't drive impact.

    But how do you actually make this work? Here's my practical playbook 👇🏻

    1️⃣ 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 "𝗪𝗵𝘆"
    - Define outcomes tied to business goals: partner with leadership to align on 1-2 KPIs per quarter (e.g., "Reduce churn by 10%").
    - Ask: "If we deliver feature X, what outcome does it enable?" If there's no clear answer, rethink it.

    2️⃣ 𝗕𝗿𝗲𝗮𝗸 𝗢𝘂𝘁𝗰𝗼𝗺𝗲𝘀 𝗶𝗻𝘁𝗼 𝗘𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁𝘀
    Outcomes are broad—break them into testable hypotheses.
    - Example: to "Increase user engagement by 20%," run:
      - A/B test push notification timing.
      - Pilot a gamified onboarding flow.
      - Measure DAU/WAU ratios weekly.

    3️⃣ 𝗔𝗱𝗼𝗽𝘁 𝗙𝗹𝗲𝘅𝗶𝗯𝗹𝗲 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀
    - OKRs: link Objectives (outcomes) to Key Results (metrics).
    - Impact Mapping: visualize how features connect to goals.
    - RICE Scoring: prioritize initiatives by Reach, Impact, Confidence, Effort (see the sketch after this post).

    4️⃣ 𝗚𝗲𝘁 𝗦𝘁𝗮𝗸𝗲𝗵𝗼𝗹𝗱𝗲𝗿 𝗕𝘂𝘆-𝗜𝗻
    - Frame outcomes as ROI: show how "Reduce support tickets by 25%" cuts costs.
    - Prototype outcomes first: share a mock roadmap with leadership, highlighting gaps in current feature-centric plans.

    5️⃣ 𝗠𝗲𝗮𝘀𝘂𝗿𝗲, 𝗟𝗲𝗮𝗿𝗻, 𝗜𝘁𝗲𝗿𝗮𝘁𝗲
    - Track leading indicators (e.g., user behavior changes) alongside lagging metrics (e.g., revenue).
    - Celebrate "failures": killing a feature that didn't drive outcomes is a win.

    𝟯 𝗧𝗵𝗶𝗻𝗴𝘀 𝘁𝗼 𝗔𝘃𝗼𝗶𝗱
    - Vague outcomes: "Improve UX" → ❌ | "Reduce checkout abandonment by 20%" → ✅.
    - Overloading the roadmap: focus on 1-2 outcomes per quarter.
    - Ignoring feedback loops: revisit outcomes bi-weekly—adapt as data comes in.

    This week, try this: audit your roadmap. For every feature, ask: "What outcome does this serve?" If it's unclear, reframe it or cut it.

    I believe outcome-based roadmaps are a survival tactic. Let's build products that matter.

    👉 How are you bridging the gap between features and impact? Would love to know your process.
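    For the RICE scoring mentioned in step 3️⃣, a minimal sketch using the standard formula (Reach × Impact × Confidence ÷ Effort); the initiatives and numbers below are invented for illustration:

    ```python
    # Minimal RICE prioritization: score = reach * impact * confidence / effort.
    # Initiative names and figures are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class Initiative:
        name: str
        reach: float       # users affected per quarter
        impact: float      # 0.25 = minimal ... 3 = massive
        confidence: float  # 0.0-1.0
        effort: float      # person-months

        @property
        def rice(self) -> float:
            return self.reach * self.impact * self.confidence / self.effort

    backlog = [
        Initiative("Gamified onboarding", reach=8000, impact=2, confidence=0.8, effort=4),
        Initiative("Pricing calculator", reach=1500, impact=1, confidence=0.9, effort=2),
    ]

    # Highest score first: the ranking, not the raw number, drives the roadmap.
    for item in sorted(backlog, key=lambda i: i.rice, reverse=True):
        print(f"{item.name}: RICE = {item.rice:,.0f}")
    ```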

  • View profile for Oliver King

    Founder & Investor | AI Operations for Financial Services

    5,053 followers

    The best systems need the least management. Yet we keep adding steps, checkpoints, and approvals.

    I used to believe great companies were built on comprehensive processes. My first startup had detailed procedures for everything — each sales interaction, support ticket, and feature release followed a precise playbook. As we scaled, our process documentation grew faster than our revenue. Team velocity slowed. Innovation suffered. Talented people spent more time following protocols than solving problems.

    The turning point came when we rebuilt our approach around outcomes instead of activities:
    1️⃣ We replaced activity metrics ("number of calls made") with outcome metrics ("deals progressed")
    2️⃣ We stopped documenting how tasks should be done and started defining what success looked like
    3️⃣ We built automated guardrails instead of manual checkpoints
    4️⃣ We focused quality control on system inputs and outputs, not every step in between

    The results were transformative. Teams moved faster. Quality improved. People stayed energized.

    Business process exists to manage risk and ensure quality—both valid concerns. But most companies implement these controls at the tactical level when they belong at the systems level.

    Think of it like this: you can micromanage a road trip by dictating every turn, or you can set a destination, provide a reliable vehicle with good brakes, and trust the driver to navigate. The difference is critical. Tactical processes control behaviors, while systems-level thinking shapes environments.

    Some practical shifts to consider:
    1️⃣ Replace decision chains with clear boundaries and after-action reviews
    2️⃣ Substitute detailed instructions with clear success criteria
    3️⃣ Trade activity monitoring for outcome measurement
    4️⃣ Swap manual checks for automated testing (see the sketch after this post)
    5️⃣ Replace rigid workflows with principles and guardrails

    Design systems that make quality inevitable, not processes that make errors impossible. Operational excellence is fundamentally about outcome clarity, not process quantity.

    #startups #founders #growth #ai
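    One possible shape of an "automated guardrail" on system inputs and outputs, sketched against the post's own sales example (the record type, rules, and names are hypothetical): validation runs at the boundary on every write, so no manual checkpoint is needed in between.

    ```python
    # Hypothetical input guardrail: reject out-of-bounds records at the system
    # boundary instead of manually reviewing each step that produced them.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Deal:
        account: str
        amount: float
        stage: str

    VALID_STAGES = {"lead", "qualified", "proposal", "closed"}

    def validate_deal(deal: Deal) -> Deal:
        """The guardrail: fail loudly on bad data, pass good data untouched."""
        if not deal.account:
            raise ValueError("account is required")
        if deal.amount <= 0:
            raise ValueError(f"amount must be positive, got {deal.amount}")
        if deal.stage not in VALID_STAGES:
            raise ValueError(f"unknown stage: {deal.stage!r}")
        return deal

    def record_deal(deal: Deal, pipeline: list) -> None:
        pipeline.append(validate_deal(deal))  # guardrail runs on every write

    pipeline = []
    record_deal(Deal("Acme Corp", 12_000.0, "qualified"), pipeline)  # passes
    # record_deal(Deal("", -5.0, "unknown"), pipeline)  # would raise ValueError
    ```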

  • View profile for Kuldeep Singh Sidhu

    Senior Data Scientist @ Walmart | BITS Pilani

    13,347 followers

    At Recombee, in collaboration with Czech Technical University in Prague and Charles University, researchers have tackled a core challenge in recommender systems: embedding scalability. Their latest research introduces CompresSAE, an embedding compression method that leverages sparse autoencoders to efficiently manage massive embedding tables.

    Dense embeddings are powerful, yet as their size increases, so do the costs: storage, latency, and computational demands all escalate significantly. To address this, CompresSAE projects dense embeddings into high-dimensional sparse spaces. Instead of traditional linear approaches or retraining-intensive methods, the model employs a sparsification strategy that maintains representation quality while drastically cutting memory usage.

    Under the hood, CompresSAE compresses embeddings by activating only a small subset of dimensions (e.g., 32 out of 4,096), effectively capturing the directional information crucial for similarity retrieval tasks. This sparsity not only reduces storage requirements dramatically but also enhances retrieval efficiency through rapid cosine similarity computations.

    In online experiments with catalogs containing over 100 million items, CompresSAE achieved nearly the same click-through rate (CTR) as models eight times larger, outperforming comparable compression methods. Crucially, the approach doesn't require retraining the backbone models, making it highly practical for large-scale deployments.

    This highlights the future potential of sparse embeddings in enhancing the efficiency and scalability of next-generation recommender systems.
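    The post describes the mechanism but includes no code. As a rough sketch of the generic technique it names (a top-k sparse autoencoder, which may differ from CompresSAE's actual architecture and training), using the 32-of-4096 sparsity from the post:

    ```python
    # Generic top-k sparse autoencoder sketch (NOT CompresSAE's actual code):
    # encode a dense embedding into a 4096-dim code with only k=32 active
    # dimensions, and decode back for the reconstruction objective.
    import torch
    import torch.nn as nn

    class TopKSparseAutoencoder(nn.Module):
        def __init__(self, dense_dim=256, sparse_dim=4096, k=32):
            super().__init__()
            self.k = k
            self.encoder = nn.Linear(dense_dim, sparse_dim)
            self.decoder = nn.Linear(sparse_dim, dense_dim)

        def encode(self, x):
            z = torch.relu(self.encoder(x))
            # keep only the k largest activations per row; zero out the rest
            topk = torch.topk(z, self.k, dim=-1)
            sparse = torch.zeros_like(z)
            return sparse.scatter(-1, topk.indices, topk.values)

        def forward(self, x):
            z = self.encode(x)
            return self.decoder(z), z

    model = TopKSparseAutoencoder()
    dense = torch.randn(8, 256)                  # batch of dense item embeddings
    recon, sparse = model(dense)
    loss = nn.functional.mse_loss(recon, dense)  # reconstruction objective
    # retrieval would compare sparse codes via cosine similarity
    print(sparse.count_nonzero(dim=-1))          # at most 32 active dims per item
    ```

    Storing only the ~32 (index, value) pairs per item, rather than a full dense vector, is what makes the memory savings and fast similarity lookups possible.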

  • View profile for Wiem Ben Naceur

    Chemical Engineer | Process Engineer | Water Treatment Engineer | Utilities Engineer | Safety Engineer

    11,578 followers

    📘 Today's Learning: Key Engineering Insights from a Process Design Guide

    As part of my continuous learning in process engineering, I reviewed a detailed Process Engineering Design Guide covering major equipment and system design considerations. Here are some of the strongest takeaways that every process engineer should know:

    ✅ 1. Line Sizing is More Than Just Velocity
    Good pipe sizing requires balancing:
    - Pressure drop limits
    - Erosion risk in multiphase flow
    - Pump NPSH
    - Noise and vibration
    ➡️ Proper line sizing protects both equipment and system stability.

    ✅ 2. Heat Exchanger Design Requires Smart Choices
    Key insights:
    - Put high-pressure, corrosive, or fouling fluids on the tube side
    - Apply a 10–20% design margin for flexibility
    - Choose TEMA configurations carefully
    - Use downward flow in condensers to avoid slugging
    ➡️ Every design detail influences safety, performance, and operability.

    ✅ 3. Separator Design Is Critical for Plant Stability
    Important lessons:
    - Correct internals selection improves phase separation
    - Residence time affects performance
    - Avoid slugging and entrainment
    - Level control coordination is essential
    ➡️ A well-designed separator ensures smooth operation across the plant.

    ✅ 4. Pump Systems: NPSH Determines Success
    What matters most:
    - Correct suction piping
    - Controlled velocities
    - Cavitation prevention
    - Proper pressure margins
    ➡️ Most pump failures start at the design stage, not in the field (see the worked example after this post).

    ✅ 5. Utilities Define Plant Reliability
    Utilities such as steam, cooling water, and instrument air must follow strict design criteria:
    - Pressure drop limits
    - Velocity control to prevent corrosion
    - Correct sizing of steam headers
    - Quality and reliability for instrument air
    ➡️ Utilities are the backbone of every process facility.

    ⭐ Final Thought
    This guide reinforces one truth:
    👉 Process engineering is about understanding the "why," not only the calculations.
    Continuous learning is essential to designing safer, more efficient, and more reliable systems.

    #ProcessEngineering #ChemicalEngineering #EngineeringDesign #OilAndGasIndustry #HeatExchangers #SeparatorDesign #PipingDesign #LineSizing #PlantOperations #ProcessSafety #EngineeringStandards #IndustrialEngineering #ContinuousLearning #EngineeringLife #TechnicalLearning #GraduateEngineer #EngineerInProgress #LinkedInLearning
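    A worked NPSH-available check using the standard formula NPSHa = (P_supply − P_vapor)/(ρ·g) + static head − friction losses. All figures are hypothetical (water near 25 °C from an open tank), chosen only to illustrate the design-stage check:

    ```python
    # Hypothetical NPSH-available check:
    #   NPSHa = (P_supply - P_vapor) / (rho * g) + static_head - friction_loss
    G = 9.81  # gravitational acceleration, m/s^2

    def npsh_available(p_supply_pa, p_vapor_pa, density_kg_m3,
                       static_head_m, friction_loss_m):
        """NPSH available at the pump suction, in metres of liquid."""
        pressure_head = (p_supply_pa - p_vapor_pa) / (density_kg_m3 * G)
        return pressure_head + static_head_m - friction_loss_m

    npsha = npsh_available(
        p_supply_pa=101_325,   # open tank at atmospheric pressure
        p_vapor_pa=3_170,      # water vapour pressure near 25 degC
        density_kg_m3=997,
        static_head_m=2.0,     # liquid level above the pump centreline
        friction_loss_m=0.8,   # suction-line losses at design flow
    )
    npshr = 4.0                # required NPSH from the pump curve (hypothetical)
    print(f"NPSHa = {npsha:.2f} m, margin over NPSHr = {npsha - npshr:.2f} m")
    # A thin margin here flags a cavitation risk at the design stage,
    # before the pump ever reaches the field.
    ```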
