1. Is an email address (e.g., name@emailservice.com) considered personal data under the Digital Personal Data Protection Act, 2023 (DPDPA)?
Yes. Under Section 2(t) of the DPDPA, personal data means any data about an individual who is identifiable by or in relation to such data. An email address, particularly one that identifies a specific individual (e.g., amitshah.bjp@zohomail.in), clearly falls under this definition. Therefore, it qualifies as personal data.

2. When an individual publicly posts their email address on a social media platform (like X/Twitter), does that count as a public disclosure or a private disclosure under the DPDPA?
Such an act qualifies as a public disclosure, because the data principal (Amit Shah) voluntarily made the data available in a publicly accessible domain. The DPDPA, while not using the specific term “public disclosure,” implicitly acknowledges in Section 4(1)(b) that consent is not required when data has been made publicly available by the data principal themselves. Thus, this situation falls under lawful processing without consent.

3. If the account were private, and one of the followers disclosed the email address publicly, would that be treated as a data breach?
Yes. In such a case, the original disclosure was made within a restricted audience (followers). If a follower then shares that personal data publicly without authorization, it could constitute an unauthorized disclosure and hence a personal data breach under Section 8(6) of the DPDPA, which requires data fiduciaries to prevent unauthorized access, disclosure, or sharing of personal data.

4. Does the fact that an email address is publicly disclosed give companies the right to send marketing or promotional emails, claiming the data is from the public domain?
No. Even if data is publicly available, the purpose limitation principle under Section 6(1) applies. Personal data made public for one purpose (e.g., communication with constituents) cannot be reused for another (e.g., marketing) without consent or a legitimate basis under Section 4(1)(b). Additionally, sending unsolicited marketing emails could violate Section 9(2), which requires data fiduciaries to process data only for lawful purposes and within the reasonable expectations of the data principal. Hence, using such publicly available personal data for marketing would not be lawful under the DPDPA unless consent is explicitly obtained.

Disclaimer: The above analysis is provided for general informational purposes only and does not constitute legal advice. Interpretations are based on the Digital Personal Data Protection Act, 2023 (DPDPA) as of the current date. #DPDPA #DataPrivacy #DataProtection #DPDP #Concur #ConsentManager
Personal Data Usage Guidelines
Summary
Personal data usage guidelines are rules and practices that help organizations handle information about individuals—like names, email addresses, or sensitive data—in a way that protects privacy and follows legal requirements. These guidelines cover how data is collected, used, shared, and kept secure, ensuring people have control over their own information.
- Clarify data purposes: Always make it clear why you are collecting personal data and only use it for the reasons you explained to the individual.
- Respect consent choices: Give people real options to agree, refuse, or change their consent for how their data is used, including marketing and communication preferences.
- Secure and review data: Protect personal information with security measures and conduct regular reviews to make sure your data practices follow privacy laws and meet customer expectations.
-
All organizations must comply with evolving privacy regulations and meet customer expectations. Clarity on what needs to be managed is critical. These are three key areas to focus on: 1) Privacy Rights Requests, 2) Consent & Communication Preferences, 3) Cookie Consent Management. Here are the details:

1) Privacy Rights Requests (DSRs)
These rights are governed by laws like the GDPR (EU), CCPA (US), etc. They empower individuals to control their personal data, including:
-- Access, deletion, correction, portability. Example: “Send me all data you have about me”
-- Restrict processing, withdraw consent. Example: “Pause processing my data for marketing”
-- Object to automated decisions. Example: “Request human review of a loan application instead of relying solely on an algorithm”
-- Opt out of sale/sharing. Example: “Do not sell my data to third parties” (CCPA)
-- Limit sensitive data use. Example: “Restrict use of my health data for analytics”

2) Consent & Communication Preferences
Governed by: GDPR, TCPA (US), CAN-SPAM (US), CASL (Canada), etc. These preferences give customers control over the following engagements:
-- Marketing opt-in/out (email, SMS, calls). Example: “Subscribe to product updates via email”
-- Transactional notifications. Example: “Receive SMS for delivery status”
-- Terms acceptance. Example: “Agree to app Terms of Service before use”
-- Sensitive data consent. Example: “Allow use of biometric data for authentication”
-- Frequency & channel preferences. Example: “Send me monthly newsletters, not weekly”

3) Cookie Consent Management
These are governed by: ePrivacy Directive (EU), GDPR, CPRA, etc. They ensure transparency and compliance with tracking technologies:
-- Published cookie policy. Example: “View detailed cookie categories on the website”
-- Consent banners (accept/reject/preferences). Example: “Choose analytics cookies only”
-- Block non-essential cookies until consent. Example: “No ad tracking until user opts in”
-- Record and audit consent. Example: “Store timestamp of user’s cookie choice”
-- Editable/revocable consent. Example: “Change cookie settings anytime via footer link”
-- Essential cookies exempt. Example: “Session cookies for login remain active”
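The record-and-audit requirements above can be sketched in code. The following is a minimal illustrative sketch (the `ConsentStore` class and the category names are hypothetical, not drawn from any regulation): every choice is timestamped for auditing, the latest entry wins so consent stays editable and revocable, and non-essential categories default to blocked until the user opts in.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of cookie-consent record-keeping: each decision is
# timestamped (auditable), revocable (latest entry wins), and non-essential
# cookies are blocked until consent is given.

@dataclass
class ConsentStore:
    history: list = field(default_factory=list)  # full audit trail, never overwritten

    def record(self, user_id: str, category: str, granted: bool) -> None:
        # Store a timestamp with every choice so it can be audited later.
        self.history.append({
            "user": user_id,
            "category": category,  # e.g. "analytics", "advertising", "essential"
            "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def current(self, user_id: str, category: str) -> bool:
        # The most recent entry wins, so consent can be changed at any time.
        for entry in reversed(self.history):
            if entry["user"] == user_id and entry["category"] == category:
                return entry["granted"]
        # No recorded choice: block everything except essential cookies.
        return category == "essential"

store = ConsentStore()
store.record("u1", "analytics", True)
store.record("u1", "analytics", False)  # user later revokes via footer link
```

Keeping the full history rather than a single flag is what makes the "record and audit consent" requirement easy to satisfy: the timestamp of every past choice survives revocation.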
-
If you are an organisation using AI, or you are an AI developer, the Australian privacy regulator has just published some vital information about AI and your privacy obligations. Here is a summary of the new guides for businesses published today by the Office of the Australian Information Commissioner (OAIC), which articulate how Australian privacy law applies to AI and set out the regulator’s expectations. The first guide is intended to help businesses comply with their privacy obligations when using commercially available AI products and to help them select an appropriate product. The second provides privacy guidance to developers using personal information to train generative AI models.

GUIDE ONE: Guidance on privacy and the use of commercially available AI products. Top five takeaways:
* Privacy obligations apply to any personal information input into an AI system, as well as to the output data generated by AI (where it contains personal information).
* Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI.
* If AI systems are used to generate or infer personal information, including images, this is a collection of personal information and must comply with APP 3 (which deals with the collection of personal information).
* If personal information is being input into an AI system, APP 6 requires entities to use or disclose the information only for the primary purpose for which it was collected.
* As a matter of best practice, the OAIC recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools.

GUIDE TWO: Guidance on privacy and developing and training generative AI models. Top five takeaways:
* Developers must take reasonable steps to ensure accuracy in generative AI models.
* Just because data is publicly available or otherwise accessible does not mean it can legally be used to train or fine-tune generative AI models or systems.
* Developers must take particular care with sensitive information, which generally requires consent to be collected.
* Where developers are seeking to use personal information that they already hold for the purpose of training an AI model, and this was not a primary purpose of collection, they need to carefully consider their privacy obligations.
* Where a developer cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, to avoid regulatory risk they should seek consent for that use and/or offer individuals a meaningful and informed ability to opt out of such a use.

https://lnkd.in/gX_FrtS9
-
In my work as a Data Privacy Consultant, I've seen many companies overlook the importance of a clearly defined Internal Privacy Policy. Basically, it's like having a rulebook that guides how everyone in the company handles personal data and helps set the tone of a privacy-centric culture in the business. Here are some points that I believe should be incorporated in the policy:

1. Data Classification & Collection Principles: For instance, classifying customer data into categories like personal information, transaction history, and preferences, while ensuring that only necessary data is collected, and with explicit user consent.
2. Data Protection & Retention: Implementing encryption methods to protect customer data during storage, and determining that customer contact information will be retained for five years after the termination of their account.
3. Sensitive Data Handling: Establishing a protocol that only authorized personnel can access medical records in a healthcare organization and that any printed copies must be shredded after use.
4. Data Sharing Protocols: Setting up a secure file-sharing system for internal collaboration and ensuring that external partners sign data processing agreements before accessing any shared data.
5. Department-Specific Policies: Developing specific privacy guidelines for the marketing department to ensure compliance with regulations when conducting targeted advertising campaigns.
6. Privacy Review & Response Centre: Conducting quarterly privacy audits to evaluate data handling practices and establishing a dedicated email address for customers to submit privacy-related inquiries and concerns.
7. Privacy Inquiry & Data Request Procedures: Creating a standardized form for customers to request access to their personal data and establishing a process to verify their identity before releasing any information.

This list isn't exhaustive, and it's important to craft the policy according to the organization's specific needs and how it operates in practice. Just relying on a consultant to create a standard document might not fully meet your business goals. It's better for the organisation to be actively involved in the process 😊
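As an illustration of the retention rule mentioned in point 2 above, a retention window can be expressed as a simple dated check. This is a hypothetical sketch, not a prescribed mechanism; the function name and the five-year figure mirror the example in the post.

```python
from datetime import date, timedelta

# Illustrative sketch: customer contact data is retained for five years after
# account termination, then becomes eligible for deletion.
RETENTION = timedelta(days=5 * 365)

def retention_expired(account_closed_on: date, today: date) -> bool:
    # True once the retention window has elapsed since the account was closed.
    return today - account_closed_on > RETENTION
```

A scheduled job could run such a check against closed accounts and queue expired records for deletion, turning the written retention policy into an enforceable practice.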
-
CNIL's AI sheets: AI systems and data subjects' rights. Key takeaways: when guiding on how to address data subjects' rights, CNIL differentiates between (1) training datasets and (2) AI models.

(1) For the exercise of rights on training datasets:
- If the controller cannot identify an individual in the training dataset in order to address their rights, CNIL reminds that, under the GDPR, an individual can still provide additional information to allow them to be identified. The regulator therefore recommends retaining a certain amount of metadata on the source of the data collection to search for a person or data within the dataset.
- Right of access: (1) Recipients of data: the organization can provide categories of recipients only if it is impossible to identify them precisely; still, CNIL recommends setting up an authentication or API mechanism to record the identities of third parties and the data accessed, to be able to address this right. (2) Source of data: when the training dataset is scraped, retain the domain names and URLs of the web pages where the data was collected, to transmit to the data subjects who request them. When re-using a dataset accessible online, retain the identity of the source controller. (3) Copy of data: CNIL recommends providing the data and associated annotations and metadata in an easily understandable format, considering the dataset holder's intellectual property rights or trade secrets.

(2) For the exercise of rights on models:
- Applicability of the personal data concept to GenAI models: CNIL says that outputs from a generative AI model may be considered personal data when they relate to an identified or identifiable natural person, regardless of their accuracy. Still, the provider of the gen-AI system will not be responsible for processing the personal data contained in the outputs if it does not result from the "memory" of the model but from statistical inference from the personal data provided in the prompt. In this case, processing such data will be the responsibility of the system user.
- Identification of the data subject within the model: CNIL says that even though current techniques are not good enough to identify personal data from the model's weights, there are still cases where it is possible, such as when the model parameters explicitly contain specific training data (e.g., for support vector machines). For example, if an LLM was trained on scraped data, a company might ask a data subject to share the URL of the page concerned and the relevant field, so it can identify their data and address individual rights where possible.
- Also, CNIL again reminds to prioritize anonymizing training data, or measures preventing memorization or regurgitation, as re-training models and filtering output to address rights is still a heavy lift. #GDPR #privacy #AI
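CNIL's recommendation to retain source metadata for scraped training data can be illustrated with a small sketch. The `SourceRecord` and `SourceIndex` names are hypothetical; the point is that keeping URLs and domains alongside training examples lets a controller locate a data subject's records when, as in the LLM example above, the person shares the URL of the page concerned.

```python
from dataclasses import dataclass

# Hypothetical sketch of source-metadata retention for a scraped training set:
# each stored example keeps the URL and domain it came from, so a data subject
# who shares a URL can have their data located and their rights addressed.

@dataclass(frozen=True)
class SourceRecord:
    url: str         # page the data was scraped from
    domain: str      # retained per CNIL's right-of-access guidance
    example_id: str  # identifier of the stored training example

class SourceIndex:
    def __init__(self) -> None:
        self._by_url: dict = {}  # url -> list of SourceRecord

    def add(self, record: SourceRecord) -> None:
        self._by_url.setdefault(record.url, []).append(record)

    def find_by_url(self, url: str) -> list:
        # A data subject shares the URL of the page concerned; the controller
        # can then locate the matching training examples.
        return self._by_url.get(url, [])

index = SourceIndex()
index.add(SourceRecord("https://example.org/profile", "example.org", "ex-001"))
```

In practice such an index would live alongside the dataset itself; the trade-off CNIL acknowledges is that retaining this metadata is itself a processing of personal data and must be secured accordingly.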