Deepfake Nudify (Wired) - CISSP Exam Practice Test (Deep Dive)

Apr 30, 2026

We start with a hard look at the rise of AI-generated deepfake nude images targeting students and why cybersecurity pros can help schools respond before harm spreads. Then we switch gears into CISSP-style questions that sharpen risk math, governance decisions, and modern security architecture choices.
• how “nudify” deepfake tools work and why they are spreading fast
• the role of bots and platforms in scaling abuse
• concrete ways CISOs and security leaders can support school districts
• incident response checklists for evidence, takedowns, and notification
• building age-appropriate security awareness without creating copycats
• quantitative risk assessment using ALE, SLE, and ARO for control selection
• GDPR Article 22 transparency and governance controls with explainability
• post-quantum cryptography planning for long-term data retention
• SSD sanitisation under CCPA using cryptographic erasure and key destruction
• zero trust for 5G IoT using software-defined perimeter enforcement

If you like what you heard, please leave a review on iTunes as I would greatly appreciate your feedback.
Also, check out my videos on YouTube: just head to my channel at CISSP Cyber Training, and you will find a plethora of content to help you pass the CISSP exam the first time.
Lastly, head to CISSP Cyber Training and sign up for 360 free CISSP questions to help you in your CISSP journey.

CISSP Risk Quantification & Emerging Threats: ALE/ARO, Post-Quantum Encryption, Zero Trust IoT, and More (Domains 1, 2 & 3)

CISSP candidates must master risk quantification formulas — including ALE, SLE, and ARO — alongside emerging topics like post-quantum cryptography, GDPR Article 22 compliance, SSD data sanitization, and Zero Trust architecture for IoT. This post walks through five exam-style practice questions, explains the reasoning behind each correct answer, and extracts the broader principles you need to apply on test day and in the field.

How Do You Calculate ARO from ALE and SLE on the CISSP Exam?

Risk quantification questions are a staple of CISSP Domain 1 (Security and Risk Management). The formula chain is short but must be second nature:

Core Risk Formulas
ALE = SLE × ARO
ARO = ALE ÷ SLE
SLE = Asset Value × Exposure Factor

Annual Loss Expectancy (ALE) is the total expected loss per year from a given threat. Single Loss Expectancy (SLE) is the expected loss from a single incident. Annual Rate of Occurrence (ARO) is how many times that incident is expected to happen per year.
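The formula triangle can be sanity-checked with a few one-line helpers. This is a minimal sketch; the function names are ours, not exam terminology:

```python
def sle(asset_value: float, exposure_factor: float) -> float:
    # Single Loss Expectancy: expected loss from one incident
    return asset_value * exposure_factor

def ale(sle_value: float, aro_value: float) -> float:
    # Annual Loss Expectancy: expected loss per year
    return sle_value * aro_value

def aro(ale_value: float, sle_value: float) -> float:
    # Annual Rate of Occurrence, recovered from ALE and SLE
    return ale_value / sle_value

# The numbers from Practice Question 1 below:
print(aro(4_000_000, 800_000))  # 5.0
```

Any one of the three values can be solved when you know the other two, which is exactly how the exam hides the question: it gives you two and asks for the third.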

Practice Question 1 — IoT Supply Chain Risk

A global organization faces state-sponsored supply chain attacks targeting firmware in IoT devices. A quantitative risk assessment reveals an ALE of $4,000,000 and an SLE of $800,000. Mitigation costs are $500,000 annually. What is the ARO, and which response is most appropriate?

A — ARO = 5; mitigate via SBOM validation
B — ARO = 8; transfer risk via cyber insurance
C — ARO = 4; avoid by discontinuing IoT
D — ARO = 5; accept and monitor with threat intel

Step 1 — Calculate ARO: $4,000,000 ÷ $800,000 = 5. Eliminate B and C immediately.

Step 2 — Choose between A and D: Option D (accept and monitor) is appropriate only when mitigation cost exceeds the ALE. Here, the $500,000 mitigation cost is far less than the $4,000,000 ALE, so mitigation is clearly justified. Option A applies a Software Bill of Materials (SBOM) — a NIST SP 800-161-aligned control that enforces firmware component transparency and directly reduces supply chain risk.

Exam answer: A — ARO = 5; mitigate by implementing SBOM validation. Mitigation is justified when the annual mitigation cost is less than the ALE.
Exam tip: When you see two answer choices sharing the same ARO, the tiebreaker is always the cost-benefit logic. If mitigation cost < ALE, mitigate. If mitigation cost > ALE, accept or transfer.
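The tiebreaker reduces to a single comparison. A sketch of that logic (the labels are illustrative; real risk decisions weigh qualitative factors the exam ignores):

```python
def risk_response(ale: float, annual_control_cost: float) -> str:
    """Apply the exam's cost-benefit tiebreaker: mitigate only when the
    control costs less per year than the annual loss expectancy."""
    return "mitigate" if annual_control_cost < ale else "accept-or-transfer"

print(risk_response(ale=4_000_000, annual_control_cost=500_000))  # mitigate
print(risk_response(ale=300_000, annual_control_cost=500_000))    # accept-or-transfer
```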

What GDPR Article 22 Governance Control Addresses AI Profiling Transparency?

GDPR Article 22 grants individuals rights related to automated individual decision-making, including profiling. It requires transparency, human review mechanisms, and the ability for individuals to contest automated decisions. When an AI-driven customer profiling system is found to violate these transparency requirements, organizations need a governance control that directly addresses the compliance gap — not a band-aid.

Practice Question 2 — GDPR Article 22 AI Compliance

During a GDPR compliance review, a company discovers its AI-driven customer profiling violates transparency requirements under Article 22. Which governance control best addresses this to align with ethical AI principles?

A — Update data retention policy to limit AI data storage
B — Conduct mandatory AI ethics training for developers
C — Outsource AI compliance to third-party auditors
D — Implement a Privacy Impact Assessment with explainability metrics

Option A addresses data storage, not automated decision-making transparency. Option B improves future development practices but does not remediate the existing violation. Option C delegates without fixing. A Privacy Impact Assessment (PIA) — also called a Data Protection Impact Assessment (DPIA) under GDPR — directly assesses the profiling activities, documents risks, and establishes explainability metrics that demonstrate how automated decisions are made and how individuals can contest them.

Exam answer: D — Implement a Privacy Impact Assessment with explainability metrics. A PIA directly addresses Article 22's transparency and accountability requirements for automated decision-making.
Exam tip: On compliance questions, look for answers that address the root cause strategically, not just tactically. Training and auditing are supporting controls — they don't fix a discovered violation.
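To make "explainability metrics" concrete, here is one illustrative shape a per-decision record might take. Every field name is an assumption for the sketch, not language from GDPR or any DPIA template:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical evidence a DPIA could require for each automated decision."""
    subject_id: str
    decision: str
    top_factors: list           # human-readable reasons behind the outcome
    human_review_available: bool
    contest_channel: str        # how the individual can challenge the decision

    def meets_transparency_bar(self) -> bool:
        # Transparency: we can explain why; accountability: the person can push back.
        return bool(self.top_factors) and self.human_review_available

record = AutomatedDecisionRecord("c-1042", "declined",
                                 ["payment history", "account age"],
                                 human_review_available=True,
                                 contest_channel="privacy@example.com")
print(record.meets_transparency_bar())  # True
```

The point: a PIA does not just document risk, it forces the system to emit evidence that automated decisions can be explained and contested, which is the Article 22 gap.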

What Is the Best Encryption Method to Protect Data Against Future Quantum Decryption?

This is a rapidly growing CISSP topic. Harvest now, decrypt later (HNDL) attacks involve adversaries capturing encrypted data today with the intent to decrypt it once quantum computing becomes viable. For data with long retention requirements, current encryption standards are insufficient.

Practice Question 3 — Post-Quantum Encryption for Classified Research Data

An organization classifies quantum research data as top secret with a 15-year retention requirement. Which storage method best protects against quantum decryption threats?

A — AES-256 stored in a FIPS 140-3 validated HSM
B — NIST-approved post-quantum algorithm (CRYSTALS-Kyber)
C — Data masking and tokenization for archival
D — Sharding across geographically dispersed clouds

AES-256 and FIPS 140-3 Hardware Security Modules (HSMs) represent best-practice controls for today's threat landscape, but Grover's algorithm on a sufficiently powerful quantum computer halves AES's effective key strength, and the question's explicit quantum-threat framing points past it. Data masking and tokenization are useful for limiting exposure but don't protect the underlying data against quantum decryption over a 15-year window. Geographic sharding distributes exposure but does not address cryptographic weakness. CRYSTALS-Kyber is a post-quantum cryptographic (PQC) algorithm, standardized by NIST as ML-KEM in FIPS 203, designed to resist attacks from both classical and quantum computers.

Exam answer: B — NIST-approved post-quantum algorithm (CRYSTALS-Kyber). For data with multi-year retention, PQC algorithms are required to withstand harvest-now-decrypt-later attacks.
Critical distinction: AES-256 + HSM is the correct answer for current data protection. CRYSTALS-Kyber (or similar PQC algorithms) is the correct answer when the question includes a long retention period or explicit quantum threat context.
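One way to reason about the retention window is Mosca's inequality: if the years the data must stay secret plus the years needed to migrate to PQC exceed the years until a cryptographically relevant quantum computer exists, harvested ciphertext gets decrypted while it still matters. A sketch with assumed year figures:

```python
def hndl_exposure_years(retention_years: float,
                        migration_years: float,
                        quantum_arrival_years: float) -> float:
    """Mosca's inequality: a positive result is the window during which data
    captured today could be decrypted while still sensitive. All inputs are
    estimates, and the arrival figure below is purely an assumption."""
    return (retention_years + migration_years) - quantum_arrival_years

# Question 3's 15-year retention, an assumed 3-year PQC migration, and an
# assumed 12 years until a cryptographically relevant quantum computer:
print(hndl_exposure_years(15, 3, 12))  # 6 (positive: a 6-year exposure gap)
```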

Which Sanitization Method Works for Solid-State Drives (SSDs) Under CCPA?

Data sanitization questions require knowing which physical or logical method applies to which storage media. The CISSP exam tests this precisely — methods that work on magnetic hard drives often fail on SSDs.

Practice Question 4 — SSD Sanitization for CCPA Compliance

To comply with CCPA's data minimization principle, a company must securely dispose of obsolete customer records on solid-state drives (SSDs). Which sanitization method ensures no residual data remains?

A — Multi-pass overwrite (DoD 5220.22-M standard)
B — Degaussing followed by sanitization verification
C — Cryptographic erasure with secure key destruction
D — Factory reset with secure boot reinitialization

Multi-pass overwrite (DoD 5220.22-M) is effective for magnetic spinning hard drives but unreliable on SSDs due to wear leveling — the SSD controller remaps write operations across cells, meaning overwrite passes do not reliably reach all locations where data was stored. Degaussing works by disrupting magnetic fields; SSDs store data in NAND flash cells and are not affected by magnetic fields. A factory reset reinitializes the operating environment but leaves data intact on the flash chips. Cryptographic erasure works by encrypting all data with a key and then destroying the key — rendering all stored data permanently unrecoverable. This is the NIST SP 800-88 recommended method for SSDs intended for reuse.

Exam answer: C — Cryptographic erasure with secure key destruction. Per NIST SP 800-88, this is the approved method for SSD sanitization when the device will be repurposed.
Exam tip: Physical destruction (shredding, disintegration) is also valid for SSDs that will not be reused. If the question specifies repurposing or redeployment, cryptographic erasure is the answer. If disposal is permanent, physical destruction is equally correct.
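The mechanics of cryptographic erasure can be illustrated with a toy stream cipher (SHA-256 in counter mode, standard library only; this is emphatically not the cipher a real self-encrypting drive uses, which is AES inside the controller):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream, for illustration only.
    blocks = []
    for counter in range((length // 32) + 1):
        blocks.append(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
    return b"".join(blocks)[:length]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

mek = secrets.token_bytes(32)              # media encryption key
record = b"obsolete customer record"
on_flash = xor_bytes(record, keystream(mek, len(record)))  # what the NAND cells hold

# Normal operation: the controller holds the key, so reads decrypt transparently.
assert xor_bytes(on_flash, keystream(mek, len(on_flash))) == record

# Cryptographic erasure: destroy the key. The ciphertext still sits wherever
# wear leveling scattered it, but it is no longer recoverable.
mek = None
```

This is the exam's point: overwriting can miss remapped cells, but destroying the key invalidates every copy of the data at once, wherever the controller placed it.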

What Enforces Least Privilege for IoT Devices in a Zero Trust 5G Architecture?

Zero Trust Architecture (ZTA) operates on the principle of "never trust, always verify" — no device, user, or session is implicitly trusted based on network location. In 5G edge environments with IoT devices, least privilege must be enforced dynamically at the device identity level, not at the perimeter.

Practice Question 5 — Zero Trust IoT on 5G Edge Networks

Designing a Zero Trust architecture for a 5G edge network — which component best enforces least privilege for IoT device communications?

A — Next-generation firewall with deep packet inspection
B — Software-Defined Perimeter (SDP) with dynamic policy enforcement
C — Trusted Platform Module (TPM) for device attestation
D — API gateway with OAuth 2.0 authorization

An NGFW with DPI provides strong traffic inspection but enforces policies based on network segments, not individual device identity. A Trusted Platform Module (TPM) is a hardware security chip that supports device attestation — it verifies device integrity but does not itself enforce least-privilege access policies for communications. An API gateway with OAuth 2.0 adds access control for API calls but lacks the granular, per-device dynamic policy enforcement needed for heterogeneous IoT fleets. A Software-Defined Perimeter (SDP) implements Zero Trust by requiring every device to authenticate and receive dynamically scoped network access — only what that device needs, at that moment, is permitted. This is device-specific, context-aware, and ideal for 5G's distributed, high-density IoT environments.

Exam answer: B — Software-Defined Perimeter (SDP) with dynamic policy enforcement. SDP enforces per-device least-privilege access dynamically, which is the core requirement of Zero Trust for IoT.
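The per-device, default-deny behavior that makes SDP the answer can be sketched in a few lines. The device IDs, destinations, and policy table here are invented for illustration:

```python
# Hypothetical SDP controller policy: device identity -> allowed (dest, protocol) flows.
POLICY = {
    "sensor-7f3a": {("telemetry.internal", "mqtt")},
    "camera-22b1": {("video-ingest.internal", "rtsp")},
}

def authorize(device_id: str, attested: bool, dest: str, proto: str) -> bool:
    """Default-deny: permit a flow only if the device passed attestation AND
    the exact (destination, protocol) pair is in its least-privilege policy."""
    return attested and (dest, proto) in POLICY.get(device_id, set())

print(authorize("sensor-7f3a", True, "telemetry.internal", "mqtt"))     # True
print(authorize("sensor-7f3a", True, "video-ingest.internal", "rtsp"))  # False: out of scope
print(authorize("camera-22b1", False, "video-ingest.internal", "rtsp")) # False: failed attestation
```

Contrast this with a segment-based firewall rule, where any device on the IoT VLAN would match; here, only the named device, in good standing, reaches only its own destination.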

Key Exam Takeaways

  • ARO = ALE ÷ SLE. Eliminate wrong-ARO answers first, then apply cost-benefit logic to choose between mitigation and acceptance.
  • Mitigate when mitigation cost < ALE; accept when mitigation cost > ALE. This is the quantitative threshold the exam tests repeatedly.
  • GDPR Article 22 requires transparency in automated decision-making. A Privacy Impact Assessment with explainability metrics is the correct strategic control — training and auditing are supporting, not corrective.
  • Post-quantum cryptography (PQC) is required for long-retention sensitive data. CRYSTALS-Kyber (standardized as ML-KEM in FIPS 203) and CRYSTALS-Dilithium (standardized as ML-DSA in FIPS 204) are the NIST-standardized PQC algorithms to know.
  • SSDs cannot be reliably sanitized by overwrite or degaussing. Cryptographic erasure (NIST SP 800-88) is correct for reuse; physical destruction for permanent disposal.
  • Zero Trust least privilege for IoT = Software-Defined Perimeter. NGFWs, TPMs, and API gateways are valuable components but do not provide device-specific dynamic policy enforcement.
  • SBOM (Software Bill of Materials) is the NIST SP 800-161-aligned control for firmware supply chain risk in IoT environments — know it for both exam questions and real-world vendor risk assessments.

FAQ: CISSP Exam Questions Answered

How do I calculate ARO on the CISSP exam if I only know ALE and SLE?

Divide ALE by SLE: ARO = ALE ÷ SLE. If ALE is $4,000,000 and SLE is $800,000, the ARO is 5 — meaning the event is expected to occur five times per year. Memorize the triangle: any one of ALE, SLE, or ARO can be solved when you know the other two.

What is the difference between accepting and mitigating risk in a CISSP quantitative risk question?

Acceptance is appropriate when the cost of mitigation exceeds the expected loss (ALE). If a control costs $600,000 annually but the ALE is only $300,000, acceptance is financially rational. Mitigation is justified when the control cost is less than the ALE — you're spending less than you stand to lose.

What NIST post-quantum cryptography algorithms do I need to know for the CISSP exam?

Focus on the algorithms NIST finalized in 2024: CRYSTALS-Kyber (standardized as ML-KEM in FIPS 203, for key encapsulation) and CRYSTALS-Dilithium (standardized as ML-DSA in FIPS 204, for digital signatures). The exam tests the concept that asymmetric algorithms like RSA and ECC are broken outright by Shor's algorithm, that symmetric algorithms like AES are weakened (though not broken) by Grover's, and that PQC algorithms are designed to resist both classical and quantum adversaries.

Why doesn't degaussing work on SSDs for CISSP sanitization questions?

Degaussing disrupts magnetic fields to destroy data on magnetic media (HDDs, magnetic tape). SSDs store data in NAND flash memory cells, which are not magnetic — degaussing has no effect on them. For the CISSP exam, remember: overwrite and degaussing = HDD/magnetic media; cryptographic erasure = SSD reuse; physical destruction = SSD permanent disposal.

What is a Software-Defined Perimeter (SDP) and why does it enforce least privilege better than a firewall?

An SDP is a Zero Trust network access framework that requires each device or user to authenticate and receive a dynamically scoped, minimal set of network permissions before any connection is allowed — even to internal resources. Unlike a firewall, which enforces rules based on network segments or IP ranges, an SDP enforces identity-based, context-aware policies for each individual session. For IoT environments where devices have diverse roles, SDP provides the granularity that perimeter-based firewalls cannot.

Ready to Pass the CISSP Exam the First Time?

Head over to CISSPCyberTraining.com for free CISSP practice questions, a 250-question final quiz, and a full library of domain-specific content. Whether you need free study resources, an affordable question bank, or full coaching support, CISSP Cyber Training has a tier built for where you are in your journey. If you already hold your CISSP, the platform also tracks your CPE credits — so you never risk losing the certification you worked hard to earn.


TRANSCRIPT

SPEAKER_01  

Good morning, everybody. It's Sean Gerber with CISSP Cyber Training, and hope you all are having a beautifully blessed day today. Today is Thursday, and we are going to be going over some CISSP questions on a deep dive related to the CISSP exam. And this is what we do on Thursdays. On Mondays, we talk about topics. On Thursdays, we get into CISSP questions. And that's what we're going to do today. But before we do, I had an article that I really wanted to bring up to you all because we talk about this a lot in CISSP cyber training, the importance of what you guys have and the skills you have as cybersecurity professionals on what you can do to make a difference within your community. I'm going to just stress this you guys have the ability to make significant positive change in our communities and in our world because of the skills that you have. And right now, people are just struggling to understand all of this mess. And I think it's imperative that you guys do something about it. So this is just a quick article. I'm going to show you about that. So this article is from Wired magazine, and it says the deep fake nudes crisis in schools is much worse than you thought. Now, Wired said they indicated they found nearly 90 schools and 600 students around the world impacted by AI-generated deepfake nude images. So, and this is a problem, right? So I have uh seven children, four of which are females, and I love my daughters to death. And I would say that they came up, I the blessing is they grew up in a world where this was not quite there yet. On the flip side, my granddaughters, who are just getting started in this world as they are, um, are going to deal with this. And this is a terrible thing that you all need to be able to help schools understand what's actually going on and help them with this situation. So here are the key points around what's in this article. 
And I highly recommend you go check it out, read it, and then start kind of maybe mulling over what you can do as a professional. I've got some at the end of this, I've got about seven key points that I'm going to bring up that I think you guys could really use is based on the role that you're currently in. So they're having a rapid growth of the air quotes nudify deep fake tools. And these tools basically generate fake images of nude nudes of individuals. So you basically take a picture of like my daughter, they would then if the AI would take that picture of her from social platforms, and then it creates a nude or inappropriate pictures of her. And so it's just terrible, right? So now it used to be where you, if people had those pictures, right? Let's not say children, but let's just say of adults, those people kept them off to the side. Now you don't even need the ability to have pictures that you took at one point. They are all can be generated up as something that doesn't even exist. But once it's out there, it doesn't matter whether it existed or not. It's it's air quotes real. So this is a large and expanding ecosystem. There are dozens of bots of websites and services out there offering this capability, and they do it through subscription or pay per image situations. Now, unfortunately, pornography is a huge problem with many different people throughout the world, and now this is only getting worse, and it's going down a path that is just it's unsustainable and it's just terrible. It hurts so many people. So basically, Telegram and similar platforms play a major role in this. Telegram hosts bots that automatically generate explicit deepfakes, and they found 50 plus bots with millions of combined users producing such content, right? So this is as soon as these bots are taken down, new ones show up. It's just it's a very big problem that it's really are overwhelmingly affecting women and girls. 
So this comes down to the social media stuff that your people, your kids are putting out there, and images are being taken of them and then are put out into the public domain. Now, this is frequently used against in students, influencers, politicians, and celebrities. Well, why is that? Because of the fact that these folks are always in the news, they're always somewhere within the overall social ecosystem, so therefore they're an easy target by these people. Now, most of these targets are non-consenting individuals, and that's, I mean, realistically, how many of you would want to have this happen to you? I would say the number's probably in the point zero one percent. Obviously, somebody out there might like that, but the majority of people are not asking for this, and this is what's currently happening. So they have students that are creating explicit images of classmates, they have AI-powered sexual harassment aspects of it, and all of these pieces are put in place for humiliation, harassment, and psychological trauma. The goal is just to create some level of power over another individual, and it's just it's becoming a big problem. Now, there's financial incentives to fuel this industry. Many sites will sell credits or tokens, and they overall it's generating millions of dollars annually for these types of organizations. Uh, so there's legal and regulatory responses are coming, unfortunately, because of so many things within the legal space, they're lagging behind. They're lagging behind because one, the legal system is was designed for something that is not this instantaneous and fast. And two, I would also be willing to say that many of the people in the legal world do not truly understand the risk to our children and to our young ladies. Uh, it's just not good. And so, therefore, it's it's just falling behind. And there's other competing priorities, unfortunately. So, the one thing that why people are so concerned is ease of use, right? 
Anybody with a smartphone, and now with Google Glasses or Apple Glasses, whatever's coming out, can take deep fakes. They can create them in minutes and they can take pictures pretty much anywhere. Millions of users and automated bots allow this abuse to spread globally. So it's not just people affecting within the United States. It talks about different organizations or different schools around the globe that are having this problem. It causes psychological harm and which in turn, many cases turns into physical harm that these poor people are doing to themselves because of they just don't know what to do. So it's just it's not good. And the last thing is it's creating a normalization of digital sexual abuse, and that is just not appropriate. So it's a big, big deal. And so I say this because if one, I have children that can be affected by this. If you're listening to this, you have someone in your life that could be affected by this, whether it's a child, whether it's a niece, a nephew, it doesn't matter. Someone in your life could be affected by this, and it's just terrible. So, what I want you to do is that we're gonna kind of go into the next phase of this podcast as far as this article specifically, are what are some things you can do as a cybersecurity leader to help this situation? So, if you're a CISO or a security leader, you can reach out to local school districts and offer free threat briefings for your their IT staff and the administration. I've did this for years when I was a hacker working for the government. I would do this to reach local school districts and then do briefings and threat, free threat things for them. Uh, you volunteer to help schools draft or review an acceptable use or AI governance policies. I think that's an important part. They don't understand this. I get a lot of nonprofits that reach out, they don't understand what they should do from a policy standpoint. Most IT departments are understaffed. 
A 60-minute briefing from a working CISO carries enormous weight to help them. Now, again, just coming to them with solutions is great, but you need to help them understand how they can make these changes in effect. So you as a security leader, you can do this right now. You can carve out a little bit of time out of your life, and you can end up putting this into school districts. Now, I also say that, and this comes back to something I offer with CISSP Cyber Training, you can also get CPE credits for things like this. So it's an important part of giving back to the community. And I tell you this, I used to think it was kind of bunk. I'm like, yeah, I'll give back to the community. But now, as I'm seeing this more as the older I get, and two, as this is becoming more proliferated from the security standpoint, there's a lot that you can do as a security professional. So if you do deal with security awareness, develop and donate free deepfake awareness curriculum for K-12 audiences, right? So something you could do. You could even put this online and help people with this. Ensure your curriculum covers three things: what these tools are, how little it takes to become a target, and what to do if it happens to you. You can use ISC2, which has community education programs, as models, and so does SANS. So all of that stuff is available to you out there as well. If you're in the threat intelligence and research side, document and report active nudify infrastructure: bots, websites, Telegram channels, all of those things out there. Report the findings to the National Center for Missing and Exploited Children. They have a CyberTipline. That's another way you can do that. And then treat this the same way a community hunts ransomware infrastructure. If you have that capability, this is something you could do on the side to add value to the world.
Again, I know you feel like you're in a boat that's got a million holes in it and you're sinking. But if you can plug a couple holes and everybody plugs a couple holes, we can at least make some level of impact here. If you're in the vendor risk or GRC side, add synthetic media controls to your vendor questionnaires, right? Advocate internally for AI governance frameworks that include harm vector analysis, not just bias and accuracy. And then use your organization's buying power to create market pressure on vendors to enforce their platform misuse policies. If you're a developer, you can look at contributing to the Coalition for Content Provenance and Authenticity, that's C2PA. Or you can also contribute to open source deepfake detection tools. Can you help in that space? As a security analyst, you can offer to present at local school board meetings or PTA events. I mean, honestly, a 20-minute talk from you walking them through this situation can be extremely valuable. Now I will tell you that if you do offer your assistance, expect to get more people asking you for it. So you're gonna have to gauge that. You're gonna have to have the ability to basically keep that under a little bit of control, but you're gonna want to make sure that you can provide these kinds of skills for them. You want to write up findings in a public threat intelligence report that schools and nonprofits can actually use. Again, not just enterprise-focused audiences. So there's some things you can do as a security analyst. So here are some actionable steps that I feel are really important that you can do right now as a security professional to help this problem. One is create incident response, right? Be on call, or walk them through how to develop an incident response process and checklist to deal with this. This includes covering notifications, evidence collection, platform takedowns; all of those pieces can be an important part.
It also can be as simple as just creating a script on how do they handle it from the staffing standpoint. What can they do to help in this situation? This is a great way for you to give back to the community. It truly is. Help them with policy development, determine AI acceptable use policies, help them with understanding how the gaps that they currently have and what they may want to put in place. Um, add mandatory reporting language so that staff know when to escalate. Again, that comes back to the incident response piece. All of those pieces can really be a valuable piece to them. We talked about training and awareness with delivering training for the uh the staff related to Nudify. Also create age-appropriate student awareness models for middle and high school students. Again, I gotta tell you, you gotta be careful in this space, and this is where you have to work with the schools because it has to be age appropriate, and you have to make sure that you're not telling kids more than what they what they need to know. The other part is when you deal with that, is when you start, kids are inquisitive, when you start bringing this up, they start looking for it. So there's gonna be a bit of a challenge with that, but that's why you have to work with the staff and see what they're willing to help you do. Uh, you may want to run tabletop exercises with IT staff and help them go through live deep fake deep fake incidents as well. So just kind of lurk, look through how are different ways that you can go and help out the community. And we talk about this at CISSP Cyber Training, and also I've getting in with your ISC Squared CISSP chapter, see if there's things that you can do and maybe bring that forward during some of the meetings that they have on what are some things that can be happening out there. I will tell you that if you are if this is something that you're trying to get some exposure for yourself, this is be a great way for you to do it. 
And I'm kind of driving this home. I feel very, very strongly this is a problem. It's a huge problem for so many people. And there's people out there that look at these things, they need help. There's people out there that put these things out there, they need help. It's caused something so dramatic that can affect so very many people. So again, I'll stop right there. I just want to tell you, as a cybersecurity professional, you can make an impact. You can make a difference. So go check out this Wired magazine article, The Deepfake Nudes Crisis in Schools Is Much Worse Than You Ever Thought. All right, so let's get into the questions we're gonna be talking about today. Okay, so these are the questions that we're gonna be getting into today. You can get all of these questions at CISSP Cyber Training. I wanted to let you know I put out a brand new 250 page or 50 page. Oh my gosh, that would be terrible. 250 question final quiz that's available to you. Just go out there to CISSP Cyber Training, sign up for my free CISSP questions. I'm actually gonna be putting out a link here in the near future for these questions, the final quiz if you just want that. But it's a 250 question quiz specifically for you. The ultimate goal is to get you everything you need to pass the CISSP exam the first time. And I'm putting as much stuff out there as I possibly can. At CISSP Cyber Training, we have free content, we have inexpensive content, and then we have more of a robust coaching type of program available for you as well. And that's all coming in the near future. Okay, so let's go through question number one. Question one: a global organization faces increased risk from state-sponsored supply chain attacks targeting firmware in their IoT devices. A quantitative risk assessment reveals the annualized loss expectancy, ALE, of $4 million with a single loss expectancy, SLE, of eight hundred thousand dollars.
What is the annual rate of occurrence and which response is most appropriate if mitigating costs are five hundred thousand dollars annually? Okay, so there's a lot in here. So we're talking about ALE, we're talking about SLE, we're talking about ARO. These are some of the big questions that people struggle with on the CISSP exam. So let's kind of walk through the different answers. A, ARO equals five, mitigate by implementing software bill of materials validation. B, ARO is eight, transfer the risk via cyber insurance. C, ARO is four, avoid by discontinuing IoT deployments, or D, ARO five, accept and monitor with threat intelligence. Okay, so when you get to the CISSP exam, first off, if you don't know what the answer is, there's some things you can do to potentially help you. One of the things is that if you see two answers that are very similar, that could maybe be leading you down the path of where it's at. So let's talk about the ones that are not correct. Well, before we do that, let's walk through what the ALE is. So ALE equals SLE times ARO. So we know ALE, which in this case is four million dollars, and we know the SLE is eight hundred thousand dollars. So then when you do this simple algebraic math, you basically have four million dollars divided by eight hundred thousand dollars. What does that give you? Well, that gives you five, right? So you're gonna need to know that that's what the case is. So ALE equals SLE times ARO. So you know that gives you the five. So right there, you can probably throw out the other two that aren't five, right? ARO is eight, nope, throw it out. ARO is four, nope, that's not it. Throw it out. So now you're down to two. ARO five for both A and for D. So A is mitigate by implementing software bill of materials validation.
A software bill of materials ensures transparency in the firmware components and reduces your supply chain risk per NIST SP 800-161. The $500,000 mitigation cost is justified because it is far less than the $4 million ALE. When the annual cost of a control is well below the annualized loss it addresses, mitigation makes financial sense, so you can go with an ARO of five and mitigate by implementing SBOM validation at a cost of $500,000 annually. It's a tough question. Now, D, ARO is five, accept and monitor with threat intelligence: this one would potentially be valid if your ALE were around, say, $300,000. That would change the numbers a bit, but if the ALE falls below your mitigation cost, you don't want to spend half a million dollars to address a $300,000 annualized loss. In that case you would accept the risk and monitor with threat intelligence. You'll have to work with your senior leaders to determine what is best for your organization, but again, if your ALE is $4 million and the control costs half a million dollars annually, it is worth it, because it mitigates a much larger annualized loss. All right, let's move on to question two. During a GDPR compliance review, a company discovers that its AI-driven customer profiling violates transparency requirements under Article 22. Which governance control best addresses this to align with ethical AI principles?
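Before we dig into question two, the ALE arithmetic from question one can be sketched in a few lines of Python, using the dollar figures straight from the question (the formula ALE = SLE × ARO is the standard one):

```python
# ALE = SLE * ARO, so ARO = ALE / SLE.
ale = 4_000_000          # annualized loss expectancy from the question
sle = 800_000            # single loss expectancy from the question
aro = ale / sle          # annual rate of occurrence
print(aro)               # 5.0 -> eliminates the answers where ARO is 8 or 4

# Cost-benefit check: mitigation is justified when the annual control
# cost is less than the annualized loss it addresses.
mitigation_cost = 500_000
print(mitigation_cost < ale)   # True -> mitigate (answer A) beats accept (answer D)
```

The same two-step pattern (solve for the missing variable, then compare control cost against ALE) covers most quantitative risk questions on the exam.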
Again, during a GDPR compliance review, a company discovers its AI-driven customer profiling violates transparency requirements under Article 22. Which governance control best addresses this to align with ethical AI principles? A: update the data retention policy to limit AI data storage. B: conduct mandatory AI ethics training for developers. C: outsource AI compliance to third-party auditors. D: implement a privacy impact assessment with explainability metrics. Okay, so looking at this question, there is AI-driven customer profiling that violates a transparency requirement. Article 22 requires transparency in automated decision-making; that is what the company must address. Let's keep that in mind as we go through the incorrect answers. A, update the data retention policy to limit AI data storage: that doesn't deal with the transparency of automated decision-making at all; it only deals with data storage, so it is not the correct answer. B, conduct mandatory AI ethics training for developers: this will help developers keep Article 22 in mind in whatever they create, but it does not address the transparency violation that was actually discovered, so it doesn't fix the larger problem. C, outsource AI compliance to third-party auditors: I don't see how that would be especially valuable here, but let's just say it is something you want to do.
That still does not fix the overall problem you have to deal with. So you want to develop and implement a privacy impact assessment, and part of that assessment has explainability metrics built into it. You need the assessment to find out exactly what's going on, and you need metrics to understand what good looks like and what you can start tracking going forward. This directly addresses the profiling issue, integrates risk assessment, and satisfies the regulatory compliance requirement. Keep this in mind: if you have a compliance issue, you have to deal with it. Handling it on a tactical basis is fine, but it must also be addressed from a strategic standpoint, not just a tactical one. So the correct answer is D, implement a privacy impact assessment with explainability metrics. Okay, next question. An organization classifies quantum research data as top secret with a 15-year retention requirement. To protect against quantum decryption threats, which storage method is the most secure? Let's talk through this question. They have intellectual property, quantum research data, labeled top secret with a 15-year retention period before it can be deleted. The concern is that as the data moves from one place to another, it could be copied; someone could get into the organization, steal the information, and stash it somewhere else. With today's encryption algorithms, an attacker cannot decrypt it now, but the thought is that a future quantum computer will be able to. So what are we going to do to make sure the data stays protected while it's stored for 15 years?
A: encrypt with AES-256 and store in a FIPS 140-3 validated HSM, your hardware security module; that is basically applying the strong encryption we have today and putting it in a hardened, validated system. B: use a NIST-approved post-quantum algorithm like Kyber. C: apply data masking and tokenization for archival. D: implement sharding across geographically dispersed clouds. Let's go through the answers that are incorrect. Implement sharding across geographically dispersed clouds: I will admit that when I first read "sharding" I thought of something very different, and I'll leave that up to you all. But sharding just means you're splitting the data up across clouds in various locations, so your exposure is spread across multiple places. That is not correct; it's not going to help you against quantum decryption. Apply data masking and tokenization for archival: that can help in the near term, but it won't protect data that has been exfiltrated and is later decrypted with quantum technology. A reasonable first step, but not a long-term defense. Encrypt with AES-256 and store in a FIPS 140-3 validated HSM: AES-256 is one of the current standard encryption algorithms in use, and FIPS 140-3 validation means the hardware security module meets federal requirements for cryptographic modules.
That's wonderful for today's technology, but it does not deal with the post-quantum threats expected over the next 15 years. So the answer is B: use a NIST-approved post-quantum algorithm like Kyber. Quantum decryption threatens AES over the long haul; we know this is the case, and NIST's post-quantum algorithms, such as CRYSTALS-Kyber, are designed to address it. The important point of all of this is to plan a good strategy for quantum-resistant encryption in the future. Next question. To comply with CCPA's data minimization principle, a company must securely dispose of obsolete customer records on software storage devices. Which sanitization method ensures no residual data remains? A: multipass overwrite per DoD 5220.22-M standards. B: degaussing followed by sanitization verification. C: cryptographic erasure with secure key destruction. D: factory reset with secure boot reinitialization. Okay, let's go through the ones that are not correct. Multipass overwrite per DoD 5220.22-M: multipass overwriting actually works really well on physical spinning hard drives, although with the size of today's drives it takes forever. But on solid-state drives you can't really do that; it doesn't reliably work. The best physical alternative in that situation would be a hammer, but a hammer is not one of the answers, so A does not work. Degaussing followed by sanitization verification is, again, based on hardware devices that have a magnetized platter.
Yes, degaussing would work great there, but a solid-state drive is not magnetic media. And I apologize, I said "software" earlier; these acronyms goof me up, but it is a solid-state drive. Degaussing will not work on SSDs. D, factory reset with secure boot reinitialization: a factory reset just resets the software inside, and even with a secure boot reinitialization the data still resides on the SSD. So the correct answer, outside of taking a hammer to it or throwing it into a shredder, is C: cryptographic erasure with secure key destruction. On SSDs, wear leveling renders overwrites unreliable, as we talked about. Cryptographic erasure encrypts the data and then destroys the key, making the data unrecoverable; this approach meets the requirements of NIST SP 800-88. Now, cryptographic erasure is the option if you want to repurpose the SSD. If you do not wish to repurpose it, have a fun day, get out a hammer, and beat the dickens out of it; that works really well too. You just have to decide what you want to do with it. All right, last question. Designing a zero trust architecture for 5G edge networks, which component best enforces least privilege for IoT device communications? A: next-generation firewall with deep packet inspection. B: software-defined perimeter with dynamic policy enforcement. C: trusted platform module (TPM) for device attestation. D: API gateway with OAuth 2.0 authorization. All right, let's go through the ones that are not correct. A, next-generation firewall with deep packet inspection:
If you're building a zero trust architecture, a next-generation firewall is an important component, but it doesn't best enforce least privilege for IoT device communications; it will be a big help in this situation without being the answer. C, a trusted platform module for device attestation: some IoT devices do incorporate TPMs (not all of them do), and that's a great piece of the puzzle for device attestation, but it won't enforce least privilege across all device communications. D, an API gateway with OAuth 2.0 authorization: an API gateway is a great way to limit what can reach the IoT device, and you can put strict controls into it, but it's still not the best answer for enforcing least privilege on device communications. So the best answer is B, a software-defined perimeter (SDP) with dynamic policy enforcement. SDP enforces least privilege in zero trust by authenticating and authorizing every IoT device dynamically; that's the key factor, it's done on the fly. It gives you much more granular, device-specific control, which is ideal for 5G networks, unlike firewalls or API gateways, which lack device-specific policies. They're not bad to have, depending on the network you're using, but for a 5G edge network the best choice is a software-defined perimeter with dynamic policy enforcement. Okay, that's all I have for you today. Head on over to CISSPcybertraining.com; I've got some great things out there. One thing I forgot to mention: if you are a cybersecurity professional who already has your CISSP, you can download all of this content.
I'm building this out as we go, but you can now earn your CPEs. I know CPEs are a pain; they're not a lot of fun, but you've got to do them. You worked your tail off to get your CISSP; don't lose it because you didn't do your CPEs. By listening to CISSP Cyber Training, I'm giving you the documentation you need to do exactly that. It's coming out very soon as part of one of my basic-tier packages, and it's going to be great: you listen to the podcast, then click, copy, paste, and it's done; you've got a CPE, without always having to fight to track down the information you need. So head on over to CISSP Cyber Training and check it out; there's lots of great stuff there for you. I hope you all have a beautifully blessed day, and we'll catch you on the flip side. See ya. Thanks so much for joining me today on my podcast. If you liked what you heard, please leave a review on iTunes, as I would greatly appreciate your feedback. Also, check out my videos on YouTube; just head to my channel at CISSP Cyber Training, and you will find a plethora, a cornucopia, of content to help you pass the CISSP exam the first time. Lastly, head to CISSP Cyber Training and sign up for 360 free CISSP questions to help you in your CISSP journey. Thanks again for listening.

CISSP Cyber Training Academy Program!

Are you an ambitious Cybersecurity or IT professional who wants to take your career to a whole new level by achieving the CISSP Certification? 

Let CISSP Cyber Training help you pass the CISSP Test the first time!

LEARN MORE | START TODAY!