How to sell (without feeling salesy):

First, understand the Ethical Wealth Formula:

(Value First × Trust Building) × Authentic Positioning
———————————————————
Frequency of Asks × Pressure Tactics

This isn't abstract theory. It's practical math:
• Increase the numerator: deliver more value, build more trust, position more authentically
• Decrease the denominator: reduce the frequency of asks, eliminate pressure tactics
• Watch revenue soar while your integrity remains intact

Ethical doesn't mean unprofitable. It means sustainable.

Principle 1: Value-First Monetization

The approach that generates $864,000 monthly without a single "hard sell":
• Deliver so much value upfront that buying feels like the obvious next step
• Create free content so good that people say, "If this is free, imagine what's paid"
• Solve small problems for free, big transformational problems for a fee

Give until it feels slightly uncomfortable. Then give a little more.

Principle 2: Trust Through Consistency

I've never missed weekly content in 3 years, through vacations, illnesses, market crashes.

The trust-building machine that works while you sleep:
• Show up reliably when competitors disappear during tough times
• Do what you promise, when you promise it
• Maintain quality across every touchpoint

One founder implemented this and saw conversions increase 74% in 30 days, without changing offer or price.

Trust isn't built in grand gestures. It's built in the boring consistency most won't maintain.

Principle 3: Authentic Positioning

The approach that helped me raise prices 300% while increasing sales:
• Own your expertise unapologetically: confidence is not arrogance
• Speak to specific problems you solve, not vague benefits you provide
• Tell detailed stories of transformation instead of listing features

You don't need to be perfect to sell effectively. You need to be authentic about how you help.

Principle 4: Invitation vs. Manipulation

The ethical alternative to high-pressure tactics:
• Invite people when they're ready; don't push when you're ready
• Create genuine scarcity (limited capacity), not fake urgency (countdown timers)
• Respect "no" as "not now" rather than an objection to overcome

My most profitable sales sequence has zero countdown timers, zero artificial scarcity, zero pressure.

Ethical selling feels like extending help, not hunting prey.

—

Enjoy this? ♻️ Repost it to your network and follow Matt Gray for more.

Want to improve your sales strategy? Join our community of 172,000+ subscribers today: https://lnkd.in/eTp4jain
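Taken literally, the "practical math" of the formula above can be sketched in a few lines of code. Everything here is an illustrative assumption: the function name, the 1–10 scores, and the idea of scoring at all are invented for the example, not part of the original post:

```python
def ethical_wealth_score(value_first, trust_building, authentic_positioning,
                         asks_frequency, pressure_tactics):
    """Toy reading of the Ethical Wealth Formula: multiply the numerator
    factors, divide by the denominator factors. All inputs are made-up
    1-10 scores, purely for illustration."""
    return (value_first * trust_building * authentic_positioning) / (
        asks_frequency * pressure_tactics)

# Raising the numerator and lowering the denominator lifts the score:
before = ethical_wealth_score(5, 5, 5, 4, 4)  # 125 / 16 = 7.8125
after = ethical_wealth_score(8, 8, 8, 2, 1)   # 512 / 2 = 256.0
assert after > before
```

The point of the sketch is only directional: any change that grows a numerator factor or shrinks a denominator factor improves the ratio.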
Digital Design Ethics
-
🧭 How To Manage Challenging Stakeholders and Influence Without Authority (free eBook, 95 pages) (https://lnkd.in/e6RY6dQB), a practical guide on how to deal with difficult stakeholders, manage difficult situations and stay true to your product strategy. From HiPPOs (Highest Paid Person’s Opinion) to ZEbRAs (Zero Evidence But Really Arrogant). By Dean Peters.

Key takeaways:
✅ Study your stakeholders as you study your users.
✅ Attach your decisions to a goal, metric, or a problem.
✅ Have research data ready to challenge assumptions.
✅ Explain your tradeoffs, decisions, customer insights, data.
🚫 Don’t hide your designs: show unfinished work early.
✅ Explain the stage of your work and the feedback you need.
✅ For one-off requests, paint and explain the full picture.
✅ Create a space for small experiments to limit damage.
✅ Build trust for your process with regular key updates.
🚫 Don’t invite feedback on design, but on your progress.

As designers, we often sit on our work, waiting for the perfect moment to show the grand final outcome. Yet one of the most helpful strategies I’ve found is to give full, uncensored transparency about the work we are doing: the decision making, the frameworks we use to make these decisions, how we test, how we gather insights and make sense of them.

Every couple of weeks I would either write down or record a short 3–4 minute video for stakeholders. I explain the progress we’ve made over the weeks, how we’ve made decisions and what our next steps will be. I show the design work done and abandoned, informed by research, refined by designers, reviewed by engineers, fine-tuned by marketing, approved by other colleagues. I explain the current stage of the design and what kind of feedback we would love to receive. I don’t really invite early feedback on the visual appearance or flows, but I actively invite agreement on the general direction of the project from the stakeholders.
I ask if there is anything that is quite important for them, but that we might have overlooked in the process.

It’s much more difficult to argue against real data and a real established process that has led to positive outcomes over the years. In fact, stakeholders rarely know how we work. They rarely know the implications and costs of last-minute changes. They rarely see the intricate dependencies of “minor adjustments” late in the process.

Explain how your work ties in with their goals. Focus on the problem you are trying to solve and the value it delivers for them, not the solution you are suggesting. Support your stakeholders, and you might be surprised how quickly you get the support that you need.

Useful resources:
The Delicate Art of Interviewing Stakeholders, by Dan Brown 🤎 https://lnkd.in/dW5Wb8CK
Good Questions For Stakeholders, by Lisa Nguyen, Cori Widen https://lnkd.in/eNtM5bUU
UX Research to Win Over Stubborn Stakeholders, by Lizzy Burnam 🐞 https://lnkd.in/eW3Yyg5k

#ux #design
-
The guide "AI Fairness in Practice" by The Alan Turing Institute from 2023 covers the concept of fairness in AI/ML contexts. The fairness paper is part of the AI Ethics and Governance in Practice Program (link: https://lnkd.in/gvYRma_R).

The paper dives deep into various types of fairness:

DATA FAIRNESS includes:
- representativeness of data samples,
- collaboration for fit-for-purpose and sufficient data quantity,
- maintaining source integrity and measurement accuracy,
- scrutinizing timeliness, and
- relevance, appropriateness, and domain knowledge in data selection and utilization.

APPLICATION FAIRNESS involves considering equity at various stages of AI project development, including examining real-world contexts, addressing equity issues in targeted groups, and recognizing how AI model outputs may shape decision outcomes.

MODEL DESIGN AND DEVELOPMENT FAIRNESS involves ensuring fairness at all stages of the AI project workflow by:
- scrutinizing potential biases in outcome variables and proxies during problem formulation,
- conducting fairness-aware design in preprocessing and feature engineering,
- paying attention to interpretability and performance across demographic groups in model selection and training,
- addressing fairness concerns in model testing and validation,
- implementing procedural fairness for consistent application of rules and procedures.

METRIC-BASED FAIRNESS utilizes mathematical mechanisms to ensure fair distribution of outcomes and error rates among demographic groups, including:
- Demographic/Statistical Parity: Equal benefits among groups.
- Equalized Odds: Equal error rates across groups.
- True Positive Rate Parity: Equal accuracy between population subgroups.
- Positive Predictive Value Parity: Equal precision rates across groups.
- Individual Fairness: Similar treatment for similar individuals.
- Counterfactual Fairness: Consistency in decisions.

The paper further covers SYSTEM IMPLEMENTATION FAIRNESS, incl. Decision-Automation Bias (Overreliance and Overcompliance), Automation-Distrust Bias, contextual considerations for impacted individuals, and ECOSYSTEM FAIRNESS.

--

Appendix A (p. 75) lists Algorithmic Fairness Techniques throughout the AI/ML Lifecycle, e.g.:
- Preprocessing and Feature Engineering: Balancing dataset distributions across groups.
- Model Selection and Training: Penalizing information shared between attributes and predictions.
- Model Testing and Validation: Enforcing matching false positive/negative rates.
- System Implementation: Allowing accuracy-fairness trade-offs.
- Post-Implementation Monitoring: Preventing model reliance on sensitive attributes.

--

The paper also includes templates for Bias Self-Assessment, Bias Risk Management, and a Fairness Position Statement.

--

Link to authors/paper: https://lnkd.in/gczppH29

#AI #Bias #AIfairness
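As a rough illustration of what metric-based fairness checks like those above actually compute, here is a minimal, self-contained sketch. The toy predictions, group split, and variable names are invented for the example and are not taken from the Turing guide:

```python
def selection_rate(preds):
    """Fraction of positive (1) predictions in a group."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among actual positives, the fraction predicted positive."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Toy predictions and ground truth for two demographic groups A and B.
preds_a, labels_a = [1, 1, 0, 1], [1, 1, 0, 0]
preds_b, labels_b = [1, 0, 0, 0], [1, 1, 0, 0]

# Demographic/statistical parity: do both groups receive the positive
# (beneficial) outcome at similar rates?
parity_gap = abs(selection_rate(preds_a) - selection_rate(preds_b))

# True positive rate parity (one component of equalized odds): are actual
# positives found equally reliably in both groups?
tpr_gap = abs(true_positive_rate(preds_a, labels_a)
              - true_positive_rate(preds_b, labels_b))

print(parity_gap)  # 0.5 -> group A is selected far more often
print(tpr_gap)     # 0.5 -> group A's positives are detected more reliably
```

A real audit would compute these gaps per protected attribute on held-out data and set context-specific tolerances; the point here is just that each named metric reduces to comparing a simple rate across groups.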
-
Design is not production. It’s responsibility.

One of the most popular design models in the past two decades is the Double Diamond: Discover → Define → Develop → Deliver. Problem → Solution. Done.

But here’s the problem: it ends at delivery. It implies that our work as designers is finished once something is shipped or launched. And that’s dangerously incomplete.

What’s missing is a Third Diamond, one that deals with consequences:
- What happens after we put something into the world?
- How does it affect people? Communities? The environment?

This is where the real responsibility of design begins. We need to expand the scope of our process:
- Not just designing for users, but designing with and around complex systems.
- Not just celebrating creation, but confronting impact.

It’s time we move beyond the seductive allure of solutionism and start asking harder questions about our outputs.

This post is the first in a new series I’m calling “The Third Diamond”, based on themes from my book The New Designer: Rejecting Myths, Embracing Change (MIT Press). More to come.

What would your third diamond include?

#design #impact #framework #systems #process #responsibility #sustainability #ethicaldesign #thenewdesigner #thirddiamond #beyonddelivery
-
I wasn’t actively looking for this book, but it found me at just the right time.

Fairness and Machine Learning: Limitations and Opportunities by Solon Barocas, @Moritz Hardt, and Arvind Narayanan is one of those rare books that forces you to pause and rethink everything about AI fairness. It doesn’t just outline the problem; it dives deep into why fairness in AI is so complex and how we can approach it in a more meaningful way.

A few things that hit home for me:
→ Fairness isn’t just a technical problem; it’s a societal one. You can tweak a model all you want, but if the data reflects systemic inequalities, the results will too.
→ There’s a dangerous overreliance on statistical fixes. Just because a model achieves “parity” doesn’t mean it’s truly fair. Metrics alone can’t solve fairness.
→ Causality matters. AI models learn correlations, not truths, and that distinction makes all the difference in high-stakes decisions.
→ The legal system isn’t ready for AI-driven discrimination. The book explores how U.S. anti-discrimination laws fail to address algorithmic decision-making and why fairness cannot be purely a legal compliance exercise.

So, how do we fix this? The book doesn’t offer one-size-fits-all solutions (because there aren’t any), but it does provide a roadmap:
→ Intervene at the data level, not just the model. Bias starts long before a model is trained; rethinking data collection and representation is crucial.
→ Move beyond statistical fairness metrics. The book highlights the limitations of simplistic fairness measures and advocates for context-specific fairness definitions.
→ Embed fairness in the entire ML pipeline. Instead of retrofitting fairness after deployment, it should be considered at every stage, from problem definition to evaluation.
→ Leverage causality, not just correlation. Understanding the why behind patterns in data is key to designing fairer models.
→ Rethink automation itself. Sometimes, the right answer isn’t a “fairer” algorithm; it’s questioning whether an automated system should be making a decision at all.

Who should read this?
📌 AI practitioners who want to build responsible models
📌 Policymakers working on AI regulations
📌 Ethicists thinking beyond just numbers and metrics
📌 Anyone who’s ever asked, “Is this AI system actually fair?”

This book challenges the idea that fairness can be reduced to an optimization problem and forces us to confront the uncomfortable reality that maybe some decisions shouldn’t be automated at all.

Would love to hear your thoughts. Have you read it? Or do you have other must-reads on AI fairness? 👇

↧↧↧↧↧↧↧
Share this with your network ♻️
Follow me (Aishwarya Srinivasan) for no-BS AI news, insights, and educational content!
-
Not everything that is new is better.

Tech-bro and commercial design methods often over-emphasise newness and tech-solutions, and fail to notice what we destroy, displace or devalue through our designing. When we bring those tools into social design and co-design we can also bring harm and undermine current change efforts. When we're surrounded by same-ness we often get carried away and excited by our own ideas.

Design isn't only rooted in capitalism; it's also rooted in communities, in many histories and many possible futures.

Pictured is the Oracle for Transfeminist Technologies co-produced by Coding Rights and Design Justice Network.

How are we thinking about and practising:
🌱 Knowing where our (co)design methods come from. And, in whose interests and worldview they were made.
💗 Being new to a context, community or issue: noticing how our enthusiasm and fresh urgency sits alongside the histories, wisdom, love and rage of folks who are already there, doing the work. Are we listening, or are we centering ourselves?
🔒 Noticing whose ideas are valued and listened to. And, whose values, bodies and experiences lead the (co)design process? For example, in coming up with ideas, assessing ideas and implementing ideas.
👀 Noticing what's already happening in a context, workplace, system or community. As the Design Justice Network principles suggest, how are we working in solidarity with what's there? How hard are we trying to find it?
💤 Collectively dreaming futures that aren't focused solely on technology. Starting with what's *really* needed before assuming technology (or a new app) is the best way or the only way.
🌑 Noticing what we're erasing, overlooking or over-shadowing through our designing. Exploring potential consequences before we take action.
⁉️ What else should we think about and practise?

More about the oracle: https://lnkd.in/g8PgUFtR

#CoDesign #DesignJustice
-
66% of AI users say data privacy is their top concern. What does that tell us? Trust isn’t just a feature; it’s the foundation of AI’s future.

When breaches happen, the cost isn’t measured in fines or headlines alone; it’s measured in lost trust. I recently spoke with a healthcare executive who shared a haunting story: after a data breach, patients stopped using their app, not because they didn’t need the service, but because they no longer felt safe.

This isn’t just about data. It’s about people’s lives: trust broken, confidence shattered.

Consider the October 2023 incident at 23andMe: unauthorized access exposed the genetic and personal information of 6.9 million users. Imagine seeing your most private data compromised.

At Deloitte, we’ve helped organizations turn privacy challenges into opportunities by embedding trust into their AI strategies. For example, we recently partnered with a global financial institution to design a privacy-by-design framework that not only met regulatory requirements but also restored customer confidence. The result? A 15% increase in customer engagement within six months.

How can leaders rebuild trust when it’s lost?
✔️ Turn Privacy into Empowerment: Privacy isn’t just about compliance. It’s about empowering customers to own their data. When people feel in control, they trust more.
✔️ Proactively Protect Privacy: AI can do more than process data; it can safeguard it. Predictive privacy models can spot risks before they become problems, demonstrating your commitment to trust and innovation.
✔️ Lead with Ethics, Not Just Compliance: Collaborate with peers, regulators, and even competitors to set new privacy standards. Customers notice when you lead the charge for their protection.
✔️ Design for Anonymity: Techniques like differential privacy ensure sensitive data remains safe while enabling innovation. Your customers shouldn’t have to trade their privacy for progress.

Trust is fragile, but it’s also resilient when leaders take responsibility. AI without trust isn’t just limited; it’s destined to fail.

How would you regain trust in this situation? Let’s share and inspire each other 👇

#AI #DataPrivacy #Leadership #CustomerTrust #Ethics
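For readers wondering what "design for anonymity" via differential privacy looks like mechanically, here is a minimal sketch of the classic Laplace mechanism for a counting query. The epsilon value, the patient-count scenario, and the fixed seed are illustrative assumptions for the example, not anything from the post:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(true_count, epsilon, rng):
    """Differentially private count. A counting query changes by at most 1
    when one person joins or leaves the dataset (sensitivity 1), so Laplace
    noise with scale 1/epsilon yields an epsilon-DP release."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)  # fixed seed only to make the demo repeatable
exact = 128              # hypothetical: patients matching a sensitive query
noisy = dp_count(exact, epsilon=0.5, rng=rng)
print(noisy)  # close to 128, but not the exact sensitive value
```

The design trade-off is explicit: a smaller epsilon adds more noise and gives individuals stronger plausible deniability, at the cost of a less accurate published statistic.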
-
💡 Sustainability in Digital Product Design

Sustainable digital product design involves reducing environmental impact while ensuring the long-term viability and ethical use of resources.

🍏 Efficient code and architecture
✔ Optimize performance: Optimize code efficiency to minimize resource usage, reducing server load and energy consumption. This not only leads to faster apps (better UX) but also requires less computing power (less resource consumption).
✔ Use serverless architectures: A cloud environment will help allocate resources dynamically and scale only when needed.

🍏 Minimize data transfers
✔ Compress images and files: Optimize media files (e.g., using the WebP format) to reduce bandwidth usage.
✔ Lazy loading: Load assets like images or content only when they are about to be viewed, reducing the initial data transfer.
✔ Cache effectively: Use browser and server-side caching to reduce repeated requests for the same data.

🍏 Reduce digital waste
✔ Minimize features: Avoid feature bloat (https://lnkd.in/dhi7njem) by focusing on essential features that bring value to users and reduce unnecessary complexity.
✔ Encourage responsible consumption: Design apps and websites that encourage users to interact more consciously. For example, encourage the reuse of existing templates & presets and discourage users from PDF exports in favor of shared URLs.

🍏 Sustainable UI
✔ Dark mode and energy-saving themes: Offer dark mode (https://lnkd.in/dirmHHt4) and energy-saving themes for OLED screens to reduce battery consumption.
✔ Accessibility: Build products that are accessible (https://lnkd.in/dabxuGwc) to everyone. Be intentional with default settings for your users.

🍏 Ethical data use
✔ Data minimization: Only collect and store data that is necessary for the function of the product. Reducing data storage saves energy in the long term. Establish a data archiving/deletion policy in your company.
✔ Screen time reduction: Aim to reduce session duration instead of increasing it.
✔ Privacy by design: Build privacy into the product architecture from the start to avoid the need for resource-intensive changes later.

🛠 Tools
✔ Designing for Sustainability, toolkit and checklist (by IBM) https://lnkd.in/dXcYSEmk
✔ Sustainable Product Design, template (by Artiom Dashinsky) https://lnkd.in/dv_KnU3y
✔ Sustainability Innovation Framework, FigJam (by Sebastian Gier ✓) https://lnkd.in/dHnP2hzC

🖼 Checklist for sustainability by IBM

#design #sustainability #productdesign #UI #uidesign
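The data-minimization principle above is easy to make concrete in code: keep an explicit allow-list of fields and drop everything else before a record is stored. The field names and schema here are invented for illustration:

```python
# Data minimization: an explicit allow-list of the fields the product
# actually needs; everything else is dropped before the record is stored.
REQUIRED_FIELDS = {"user_id", "email", "plan"}  # hypothetical schema

def minimize(record):
    """Return only the fields we have a defined purpose for."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

signup = {
    "user_id": "u-831",
    "email": "ada@example.com",
    "plan": "free",
    "birthdate": "1990-02-01",     # not needed for the product to work
    "device_fingerprint": "ab12",  # not needed either -> never stored
}

stored = minimize(signup)
print(sorted(stored))  # ['email', 'plan', 'user_id']
```

Inverting the default in this way (allow-list rather than block-list) means newly added form fields are excluded from storage until someone deliberately justifies keeping them, which is also where a data archiving/deletion policy naturally attaches.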
-
"The field of embodied AI (EAI) is rapidly advancing. Unlike virtual AI, EAI systems can exist in, learn from, reason about, and act in the physical world. With recent advances in AI models and hardware, EAI systems are becoming increasingly capable across wider operational domains. While EAI systems can offer many benefits, they also pose significant risks, including physical harm from malicious use, mass surveillance, as well as economic and societal disruption. These risks require urgent attention from policymakers, as existing policies governing industrial robots and autonomous vehicles are insufficient to address the full range of concerns EAI systems present. To help address this issue, this paper makes three contributions. First, we provide a taxonomy of the physical, informational, economic, and social risks EAI systems pose. Second, we analyze policies in the US, EU, and UK to assess how existing frameworks address these risks and to identify critical gaps. We conclude by offering policy recommendations for the safe and beneficial deployment of EAI systems, such as mandatory testing and certification schemes, clarified liability frameworks, and strategies to manage EAI's potentially transformative economic and societal impacts."

Jared Perlo
Centre for the Governance of AI (GovAI)
Centre pour la Sécurité de l'IA (CeSIA)
Alex Robey
Fazl Barez
Luciano Floridi
Jakob Mökander
Tony Blair Institute for Global Change
Digital Ethics Center (DEC), Yale University
-
Privacy by Design and Default are more than just buzzwords.

They're fundamental principles that can make or break trust with your users. Yet many professionals still struggle to grasp their importance.

Let me break it down using a timely example from Telegram Messenger:

The Good: Privacy by Design
Telegram gets it right when it comes to Privacy by Design. Their settings are a masterclass in giving users control, offering three privacy levels for most options:
- Everybody
- Contacts
- Nobody
This shows they've integrated privacy into the very fabric of their app, giving users the power to decide who sees what (in the design).

The Miss: Privacy by Default
But here's where Telegram drops the ball: Privacy by Default. Despite offering granular privacy controls, all options are set to 'Everybody' by default. This is a major oversight.

Why does this matter? Privacy by Default means that the most secure, private setting should be the default. Telegram should have set all options to 'Nobody' by default, allowing users to opt into less privacy if they choose. This approach not only protects users but also demonstrates a commitment to their privacy from the get-go.

A Timely Reminder:
The recent arrest of Telegram's CEO highlights the importance of getting privacy right. It's not just about ticking boxes; it's about safeguarding your users and the integrity of your platform. In an era where trust is easily lost, these principles are not optional; they are essential.

Your Actionable Takeaways:
Embed Privacy by Design: Start with privacy as a core principle, not an afterthought. Make it easy for users to control their data.
Default to Safety: Always set the most private option as the default. This small step goes a long way in protecting users.
Educate and Empower: Make sure your team understands these principles and can apply them. Privacy isn't just the responsibility of the legal team; it's everyone's job.

The Bottom Line:
In today's digital landscape, privacy is power. Build it into your product from the start. Always put your users first by defaulting to the most privacy-friendly settings. This ensures compliance and also builds a foundation of trust that will set you apart.

-----------

👋 I'm Jamal! I want to help you become a world-class privacy expert so you can have the thriving career you deserve.

🔔 Hit that bell for more inspiration, insights and tips.

♻ You've made it this far, so why not repost this to your network now so they can benefit too?
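The distinction between the two principles can be shown in a few lines of code. The setting names echo the Telegram example, but the class and its API are an invented sketch, not how any real messenger is implemented:

```python
from dataclasses import dataclass

# Privacy by Design: every option supports everybody / contacts / nobody.
# Privacy by Default: new accounts start at the MOST private level, and
# only an explicit user action can widen visibility.
LEVELS = ("everybody", "contacts", "nobody")
MOST_PRIVATE = "nobody"

@dataclass
class PrivacySettings:
    phone_number: str = MOST_PRIVATE
    last_seen: str = MOST_PRIVATE
    profile_photo: str = MOST_PRIVATE

    def opt_in(self, option, level):
        """Widen visibility only on an explicit user choice."""
        if level not in LEVELS:
            raise ValueError(f"unknown level: {level}")
        setattr(self, option, level)

settings = PrivacySettings()              # private out of the box
print(settings.last_seen)                 # nobody
settings.opt_in("last_seen", "contacts")  # the user chooses to share more
print(settings.last_seen)                 # contacts
```

Flipping `MOST_PRIVATE` to `"everybody"` would reproduce the design the post criticizes: identical controls, but the burden of protecting themselves shifted onto users.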