Duration: 50-65 minutes | Target: Technical Teams
This training was built for those of us who work directly with sensitive patient data: the developers, engineers, analysts, and operators who design, ship, secure, and support the systems behind care delivery. You will learn how to recognize, protect, and responsibly handle PHI and PII in real technical workflows, so the work we build remains safe, trusted, and worthy of the people it serves.
Designed for exploration. Review material, change answers, and build confidence at your own pace. Perfect for first-time learners or refresher training.
Test your understanding with no revisions. Completing the assessment generates a printable certificate for your records or compliance documentation.
Your name will appear on your completion certificate.
You can change this anytime by clicking your name on the certificate.
Duration: 12-15 minutes
Personally Identifiable Information (PII) is any data that could reasonably identify a specific individual. Think of it as data that could be used to "pick someone out of a crowd."
Protected Health Information (PHI) is PII that exists in a healthcare context.
For each data element, select whether it's PHI, PII, Both, or Neither.
Here's a fundamental principle that will help you in every situation: PII + health context = PHI.
Why this matters for developers: You can work with health data safely as long as it's truly de-identified and contains no PII. The risk comes when identifiable information gets combined with health context.
Duration: 15-18 minutes
For each scenario, identify if PHI exposure has occurred and select the correct action.
ERROR: Payment failed for [email protected] - insulin prescription ID 789
[email protected],diabetes,insulin,2024-01-15
// TODO: Replace hardcoded connection mysql://user:[email protected]/patient_records
Duration: 20-25 minutes | Advanced Technical Scenarios
Data that seems safe individually can become PHI when combined with other information.
Reality for builders: Your database schema and API design decisions directly determine whether PHI is created, how it flows through your system, and where it gets exposed. Well-intentioned architectural choices - convenient table joins, comprehensive API responses, flexible GraphQL queries - can inadvertently create PHI exposure points.
The Setup: Single users table with all information
Why This is Dangerous:
Architectural Benefits:
The Setup: Normalized schema with foreign keys
Common Scenarios That Create PHI:
The Setup: Single endpoint returns everything about a user
Cascading Problems:
Architectural Benefits:
The Setup: Flexible GraphQL API allowing arbitrary queries
GraphQL-Specific Risks:
The Setup: API with flexible filtering and pagination
Pagination-Specific Risks:
Keep user contact info (names, emails) in different tables from medical data (diagnoses, prescriptions).
Why it matters: When separated, you reduce the chance of accidentally creating PHI. A query against just the contact table won't expose health data.
Example: users table vs medical_records table instead of one big patient_data table.
If your database schema requires joining PII+health tables just to get basic info, you're creating PHI constantly.
Alternative: Use hashed IDs at the application layer so the DB doesn't know the direct relationship.
Example: Instead of SELECT users.name, visits.diagnosis FROM users JOIN visits, your app uses a hash to look up each piece of data separately.
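A minimal sketch of that lookup pattern, assuming a node-postgres client and hypothetical users / medical_records tables, where medical_records is keyed by a derived patient_ref rather than the raw user ID:

```typescript
import { createHmac } from "crypto";
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* environment variables

// Derive an opaque reference from the user ID so medical_records never stores
// the raw users.id (a keyed HMAC rather than a plain hash, to resist guessing).
function patientRef(userId: string): string {
  const secret = process.env.PATIENT_REF_SECRET ?? "";
  return createHmac("sha256", secret).update(userId).digest("hex");
}

// Two separate lookups instead of one JOIN: contact data and health data only
// meet in application memory for this request and are never queried together.
async function getAppointmentView(userId: string) {
  const contact = await pool.query("SELECT name FROM users WHERE id = $1", [userId]);
  const visit = await pool.query(
    "SELECT visit_date FROM medical_records WHERE patient_ref = $1",
    [patientRef(userId)]
  );
  return {
    name: contact.rows[0]?.name,
    nextVisit: visit.rows[0]?.visit_date, // note: no diagnosis in this view
  };
}
```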
Audit your most common queries - are developers routinely joining contact info with medical data?
Risk: Every time this happens, PHI flows through your app, logs, caches, etc.
Action: Look for JOIN patterns between PII and health data tables in your codebase.
ORMs (like Hibernate, Entity Framework, Sequelize) often fetch ALL columns by default.
Risk: Developer wants just an email address, but the ORM pulls diagnosis codes too.
Fix: Configure explicit column selection and lazy loading to only fetch what's needed.
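As one concrete illustration, here's how explicit column selection might look with Sequelize (one of the ORMs named above); the User model and its columns are illustrative:

```typescript
import { Sequelize, DataTypes } from "sequelize";

const sequelize = new Sequelize(process.env.DATABASE_URL ?? "sqlite::memory:");

// Illustrative model: contact fields and a health-related field in one table.
const User = sequelize.define("User", {
  email: DataTypes.STRING,
  name: DataTypes.STRING,
  diagnosisCode: DataTypes.STRING, // health data we do NOT want fetched by default
});

async function getEmail(userId: number): Promise<string | null> {
  // attributes: [...] limits the SELECT to the listed columns,
  // so diagnosisCode never leaves the database for this call.
  const user = await User.findByPk(userId, { attributes: ["email"] });
  return (user?.get("email") as string) ?? null;
}
```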
Database indexes can show up in query plans, performance logs, and cache layers.
Risk: Index on a diagnosis_code field → logs show which diagnoses are being searched.
Sometimes necessary for performance, but be aware indexes expose data in monitoring tools.
Many DBs log slow queries, error queries, or all queries for debugging.
Risk: Log shows WHERE patient_name='John Smith' AND diagnosis='HIV'
Sanitize query logs, use parameterized queries, restrict log access.
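A small sketch of both habits with node-postgres: the statement is parameterized, and only the query shape and row count are logged, never the bound values (table and column names are illustrative):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* environment variables

async function findVisits(patientName: string, diagnosis: string) {
  const text =
    "SELECT visit_date FROM visits WHERE patient_name = $1 AND diagnosis = $2";

  // Parameterized query: values travel separately from the SQL text,
  // so a query logger that records only `text` never sees the PHI.
  const result = await pool.query(text, [patientName, diagnosis]);

  // Log the query shape and timing metadata, never the parameter values.
  console.info("query completed", { statement: text, rows: result.rowCount });

  return result.rows;
}
```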
If separated, you can keep contact info for 7 years but medical data for 10 (or whatever your retention policy requires).
Why it matters: HIPAA has minimum retention requirements; separating data types gives you flexibility.
Bonus: Makes it easier to respond to "right to be forgotten" requests.
Does /api/patient/123 return {name: "Jane", diagnosis: "diabetes"} in one response?
Any consumer of that endpoint sees PHI, even if they only needed the name.
Better: Separate endpoints or field selection.
Different endpoints for different data types.
Benefits:
GraphQL-style field selection or a REST parameter like ?fields=name,email
Frontend only needs to show appointment time? Don't send diagnosis codes.
Reduces: PHI flowing to the browser, client logs, network captures.
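A sketch of the allow-list idea with Express: callers can narrow the response with ?fields=, and health fields simply aren't on the list. The endpoint, field names, and inline record are illustrative:

```typescript
import express from "express";

const app = express();

// Fields a caller may request; health fields are simply not on the list.
const ALLOWED_FIELDS = ["name", "email", "nextAppointment"] as const;

app.get("/api/patients/:id", async (req, res) => {
  // Illustrative record; in practice this comes from your data layer.
  const record: Record<string, unknown> = {
    name: "Jane",
    email: "[email protected]",
    nextAppointment: "2025-11-20T09:00:00Z",
    diagnosis: "E11.9", // never exposed through this endpoint
  };

  // ?fields=name,email selects a subset; anything not allow-listed is ignored.
  const requested = String(req.query.fields ?? "").split(",").filter(Boolean);
  const fields = requested.length
    ? requested.filter((f) => (ALLOWED_FIELDS as readonly string[]).includes(f))
    : [...ALLOWED_FIELDS];

  res.json(Object.fromEntries(fields.map((f) => [f, record[f]])));
});

app.listen(3000);
```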
Maybe all staff can see contact info, but only providers see diagnoses.
HIPAA angle: Minimum necessary principle - limit access to only what's needed for job function.
Implementation: Different API scopes/permissions for different endpoint groups.
Allow more calls to /api/contact than /api/diagnoses
Makes bulk PHI extraction harder, makes scraping attempts more visible.
Security depth: Defense in depth against compromised credentials.
PHI should rarely be cached; if it is, use short TTL and encryption.
API gateway logs full request/response for debugging.
Result: Logs full of {"patient": "John", "diagnosis": "cancer"}
Sanitize logs, use correlation IDs instead of actual data, log only metadata.
"Error: Patient John Smith's diagnosis of HIV cannot be updated"
"Error: Unable to update record ID abc123. Reference code: ERR-2938"
Error messages shouldn't echo back sensitive data.
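One way the second pattern might be wired up, sketched with an Express error handler; the reference-code format and log fields are illustrative:

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();

// Central error handler: the client gets a generic message plus a reference code.
// Details stay in internal, access-controlled logs, and even those logs avoid
// echoing record contents such as names or diagnoses.
app.use(
  (err: Error, _req: express.Request, res: express.Response, _next: express.NextFunction) => {
    const referenceCode = `ERR-${randomUUID().slice(0, 8)}`;

    // Internal log keyed by the reference code, not by patient data.
    console.error("request failed", { referenceCode, error: err.message });

    res.status(500).json({
      message: `Unable to update the record. Reference code: ${referenceCode}`,
    });
  }
);
```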
Swagger docs show "patient_name": "Sarah Johnson" with real social security numbers from testing.
Obvious fake data like "patient_name": "Test Patient" or "ssn": "000-00-0000"
Docs get shared, indexed, cached - don't want real PHI there.
v2 API has proper PHI controls, but v1 is still running and returns PHI in logs.
Risk: Attackers/auditors find the old version with weaker security.
Fix: Deprecate and sunset old versions, or retrofit security controls.
`GET /api/v1/users/{hash}/activity` returns: `{"userHash": "abc123...", "sessionCount": 47, "avgSessionMinutes": 8.5, "lastActiveDate": "2025-10-15"}` - PHI?
`GET /api/v1/analytics/regional-health` returns: `{"region": "northeast", "avgMetric": 72.5, "userCount": 1847, "trend": "improving"}` - PHI?
`GET /api/v1/patients/{id}/dashboard` returns: `{"email": "[email protected]", "upcomingVisits": [{"date": "2025-11-20", "type": "Cardiac Rehabilitation", "provider": "Dr. Smith"}], "activePrescriptions": 3}` - PHI?
Reality for builders: Application logs, error tracking, APM tools, and observability platforms are where PHI exposure happens most frequently - and most silently. You're debugging, optimizing performance, tracking errors... and accidentally logging PHI to systems without BAAs.
The Setup: Developer debugging API issues in production
Why This is Dangerous:
The Setup: ORM or database client with query logging enabled
Critical Points:
The Setup: Application Performance Monitoring with automatic tracing
APM-Specific Risks:
The Setup: Error monitoring (Sentry, Rollbar, Bugsnag, Airbrake)
Error Tracking Risks:
The Setup: Centralized logging (Splunk, Elasticsearch, Datadog Logs, CloudWatch Insights)
Aggregation Risks:
Debug statements often log entire objects "temporarily" during development and get forgotten. These are PHI time bombs.
What to search for: logger.debug( or console.log( or print(; JSON.stringify(req.body) or str(user_obj); whole result sets like query.results, db.rows, api_response
✅ Good: logger.info('User login', {userId: hashId(user.id)})
❌ Bad: console.log('Debug user:', user)
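A tiny illustration of the difference: rather than dumping the whole object, log an explicit allow-list of non-sensitive fields (the field names here are assumptions):

```typescript
// Instead of console.log('Debug user:', user), log only an allow-listed,
// non-sensitive subset of the object.
const SAFE_LOG_FIELDS = ["id", "role", "createdAt"];

function safeView(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).filter(([key]) => SAFE_LOG_FIELDS.includes(key))
  );
}

const user = {
  id: "u_123",
  role: "patient",
  createdAt: "2025-01-02",
  email: "[email protected]",   // dropped before logging
  diagnosis: "E11.9",          // dropped before logging
};

console.log("Debug user:", safeView(user)); // { id: 'u_123', role: 'patient', createdAt: '2025-01-02' }
```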
Many ORMs (Sequelize, Hibernate, Entity Framework) log ALL queries by default in development mode. Developers forget to disable this for production.
What gets exposed: WHERE patient_name='John' AND diagnosis='HIV'; INSERT INTO prescriptions (patient_id, drug, dosage) VALUES ...
Fix: Disable query logging in production, or configure it to log only query structure (no parameters). Always use parameterized queries.
Consequence if missed: Every database query containing PHI is written to logs, often retained for months. This is a breach waiting to be discovered.
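For example, with Sequelize the fix can be a one-line configuration change; the development-mode callback below is an illustrative choice:

```typescript
import { Sequelize } from "sequelize";

// Query logging off in production; in development, emit only a structure-free
// breadcrumb so the query text with bound parameters is never echoed to logs.
const sequelize = new Sequelize(process.env.DATABASE_URL ?? "sqlite::memory:", {
  logging:
    process.env.NODE_ENV === "production"
      ? false                                  // nothing written to logs
      : () => console.debug("query executed"), // no SQL, no parameter values
});
```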
APM tools (Application Performance Monitoring) are designed to capture EVERYTHING by default to help with debugging. This is dangerous in healthcare.
Default capture includes:
Example: New Relic by default captures full request bodies. If someone POSTs patient diagnosis data, it's on New Relic's servers. Without a BAA, that's a HIPAA violation.
Error tracking tools (Sentry, Rollbar, Bugsnag) are built to send as much context as possible to help debug. This often includes PHI.
Common PHI exposures: req.body attached to errors (contains patient form data).
Session replay records everything: every click, every form field, every page view. If your app shows diagnoses, prescriptions, or patient names, it's ALL recorded and sent to the error-tracking vendor.
How to fix: Configure scrubbing rules, disable session replay, send only error messages (not full context), and use hashed identifiers only.
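As a sketch of what scrubbing can look like with the Sentry Node SDK (exact options vary by SDK and version, so treat this as a starting point, not a complete configuration):

```typescript
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  sendDefaultPii: false, // don't attach IPs, cookies, or user context automatically
  beforeSend(event) {
    // Strip request bodies and headers so patient form data never leaves our servers.
    if (event.request) {
      delete event.request.data;
      delete event.request.headers;
      delete event.request.cookies;
    }
    // Keep only an opaque identifier for the user, if any.
    if (event.user) {
      event.user = { id: event.user.id };
    }
    return event; // returning null would drop the event entirely
  },
});
```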
Any vendor that could potentially access PHI (even in logs) must sign a Business Associate Agreement (BAA) with you. Without BAA = automatic HIPAA violation.
Common mistakes:
Go to the vendor's website and search "BAA" or "HIPAA compliance". Most enterprise vendors have a self-service BAA signing process. If they don't offer BAAs, you CANNOT use them for any data that might contain PHI.
Example gotcha: AWS signs a BAA, but it only covers specific services. S3 (yes), but CloudWatch Logs requires configuration. Read the fine print.
HIPAA requires you to retain certain records but also to dispose of PHI when no longer needed. Keeping logs forever = compliance problem.
Common scenarios:
An auditor asks: "Show me your log retention policy and prove it's being enforced." Can you?
HIPAA requires limiting access to PHI to only what's needed for someone's job. This applies to logs too.
Common violations:
If you can't list everyone with log access right now, you have a compliance problem.
Developer exports logs to CSV for analysis, saves to Downloads folder, laptop gets stolen = breach notification to thousands of patients + regulatory investigation.
Why this happens:
Fix: Provide analysis tools IN the logging platform (queries, dashboards, alerts) so exports aren't needed.
| Tool Category | Examples | BAA Availability |
|---|---|---|
| CSP Native Logs | CloudWatch (AWS), Cloud Logging (GCP), Azure Monitor | ✅ Typically covered by CSP BAA, but verify specific services and configuration requirements |
| APM Platforms | Datadog, New Relic, AppDynamics, Dynatrace | ⚠️ Enterprise tier only, with configuration requirements (disable body capture, etc.) |
| Log Aggregation | Splunk, Elasticsearch, Datadog Logs, Sumo Logic | ⚠️ Enterprise tier typically, verify on-premises vs cloud deployments |
| Error Tracking | Sentry, Rollbar, Bugsnag, Airbrake | ⚠️ Some offer BAAs at enterprise tier, many do NOT |
| Session Replay | LogRocket, FullStory, Hotjar, Heap | ❌ Most do NOT offer BAAs or HIPAA compliance - avoid with PHI |
Golden Rule: Assume NO BAA coverage unless you've explicitly verified it in writing with your vendor account team and confirmed it covers your specific use case and plan tier.
`{"timestamp": "2025-10-19T14:23:15Z", "level": "INFO", "message": "Database query completed", "table": "user_preferences", "duration_ms": 45, "request_id": "req_abc123"}` - PHI?`POST /api/prescriptions - User: [email protected] - Body: {"medication": "Lisinopril", "dosage": "10mg", "diagnosis": "Hypertension"} - Response: 201 Created` - PHI?`[2025-10-19 14:23:15] SLOW QUERY (2.3s): SELECT * FROM appointments WHERE appointment_date > '2025-10-01' AND status = 'completed' LIMIT 100` - PHI?Critical insight for builders: Even when you never explicitly store diagnosis codes or medical conditions, user behavior patterns can reveal health information. This creates "inferential PHI" - and you're still liable under HIPAA.
The Setup: Health app with educational content about various conditions
Why This is PHI:
The Setup: Wellness app with various health tracking features
The Inference Chain:
The Setup: Mental health app with mood tracking and therapy scheduling
Critical Reality:
The Setup: Health content platform with ML-powered recommendations
ML/AI Specific Risks:
The Setup: Testing new UI for medication reminders
Experimentation Risks:
If you don't know what's PII, you can't protect it. Many developers think "user_id=12345" is anonymous, but if it maps to an email/name in another table, it's PII.
What counts as PII:
Common trap: "We use hashed user IDs in analytics, so it's anonymous!" → But if marketing can join that hash back to the CRM, it's NOT anonymous.
Action: Map all data flows: can any analytics ID be traced back to a real person? If yes, it's PII.
You don't need to store "diabetes" to reveal someone has diabetes. Behavioral patterns can imply health conditions just as clearly.
Examples of health context:
A fitness app tracked "users who viewed diabetes content" → that identifies people with potential diabetes. That's health context.
Key principle: If knowing someone did X would reveal something about their health condition, X is health context.
PII + Health Context (even behavioral) = PHI. This is true even if they're in separate systems/tables.
How correlation happens:
Ask: "Could someone with access to our analytics determine who has what health condition?" If yes → you're creating PHI.
Common defense that fails: "The data is in different systems!" → Doesn't matter. If someone with access can correlate it, it's PHI.
Option A: Aggregate only - Track cohorts, never individuals. "500 users viewed diabetes content" but never "user X viewed Y".
Option B: Hash + generalize - Use irreversible hashes for IDs, generalize health features ("wellness" not "diabetes"), make re-identification impossible.
Option C: Separate pipelines - Run contact analysis (who are our users?) completely separately from behavior analysis (what features are popular?). Never join them.
How to choose:
Whichever option you pick, rely on technical controls that make correlation impossible, not just policy. "We promise not to join the data" is not enough. A sketch of Option B follows below.
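A minimal sketch of Option B, assuming a hypothetical feature-to-category map and a per-environment hashing secret:

```typescript
import { createHmac } from "crypto";

// Map specific health features to generalized categories before anything
// leaves our systems (the mapping values are illustrative).
const FEATURE_GENERALIZATION: Record<string, string> = {
  "glucose-tracking": "wellness-tracking",
  "diabetes-content": "education-content",
  "therapy-scheduling": "scheduling",
};

// Option B in code: an irreversible keyed hash for the user, a generalized
// feature label, and nothing else that could be joined back to a person.
function toAnalyticsEvent(userId: string, feature: string) {
  const secret = process.env.ANALYTICS_HASH_SECRET ?? "";
  return {
    userHash: createHmac("sha256", secret).update(userId).digest("hex"),
    feature: FEATURE_GENERALIZATION[feature] ?? "other",
    year: new Date().toISOString().slice(0, 4), // year only, no precise timestamps
  };
}

console.log(toAnalyticsEvent("user-12345", "glucose-tracking"));
```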
If your analytics contain PHI (even inferential PHI), the analytics vendor is handling PHI and MUST have a BAA. Most don't offer BAAs at standard tiers.
Common platforms & BAA status:Even WITH a BAA, you must configure the tool correctly. Google Analytics with BAA still needs IP anonymization, user-ID scrubbing, and other protections enabled.
Red flag:If you're using free/starter tier of ANY analytics tool and tracking health behaviors, you're likely in violation.
Machine learning systems process large amounts of data, make inferences about individuals, and create new derived data. Each stage is a PHI exposure point.
Where PHI appears in ML:
Example: storing ML training data in an S3 bucket without proper access controls or BAA coverage, labeled with "patient_id" + "diagnosis".
Most analytics tools are installed with a single script tag and immediately start sending everything to third-party servers. What's being sent?
Automatic data collection includes:
Real example: a company discovered it was sending {page: "/patient/12345/diabetes-treatment-plan", userId: "[email protected]"} to Mixpanel. Full PHI exposure to a third party without a BAA.
Auditor: "Show me evidence that your analytics don't contain PHI." Can you produce documentation right now?
What to document:"We don't think it's PHI" without evidence. "Our developers are careful" without documentation. "We've never had a problem" without testing.
Good answer:"Here's our data flow diagram showing PII is hashed before analytics. Here's our BAA with Mixpanel. Here's our quarterly audit showing no PHI in analytics payloads."
Why this matters:Fines and breach notifications aside, you need to prove to auditors you've thought this through. Documentation is evidence of due diligence.
❌ Unsafe Approach:
✅ Safe Approach:
Reality for builders: Modern healthcare applications rarely exist in isolation. You're constantly integrating CRMs, EHRs (Electronic Health Records), billing systems, scheduling tools, analytics platforms, and patient portals. PHI often emerges at these integration boundaries where "safe" data from different systems combines.
The Setup:
❌ Where PHI Gets Created:
🎯 Why This Matters:
The Setup:
❌ Where PHI Gets Created:
🎯 Critical for Builders:
The Setup:
❌ Where PHI Gets Created:
🎯 Builder Checklist:
The Setup:
❌ Where PHI Gets Created:
🎯 Critical Questions:
Problems:
Benefits:
Gold Standard:
You cannot protect data you don't know about. Before integrating systems, inventory exactly what each system contains.
Questions for System A (e.g., CRM):
If you integrate System A + System B without this inventory, you're blindly creating PHI. You need to know WHAT you're combining BEFORE you combine it.
Red flag: "We'll figure out what data we need as we build the integration." No. Inventory first, design second, build third.
The instant PII meets health data, PHI exists. You need to know EXACTLY where this happens so you can protect that point.
Common combination points:
Each combination point needs proper logging controls, BAA-covered infrastructure, access controls, and audit trails. Miss one point = PHI exposure.
Action: Draw a data flow diagram. Circle every place where PII and health data meet. That's your PHI attack surface.
Once created, PHI flows through your architecture. Every system it touches becomes a PHI handler requiring protections.
Typical flow example: API Gateway, Auth Service, Patient Service, Aggregation Service, Redis cache, and logs at every layer. All need BAA coverage, encryption, and access controls.
The forgotten systems:
Document EVERY system in the flow. Verify each has appropriate safeguards. One unprotected link = breach path.
PHI protection is only as strong as the weakest link. If ANY system in your data flow lacks BAA coverage, you're in HIPAA violation.
Systems that need BAA coverage:
"We have an AWS BAA!" → But does it cover the SPECIFIC services you use? An AWS BAA might cover EC2 but not every analytics service. Read the fine print.
Verification checklist:
Every system boundary logs something. APIs log requests. Message queues log messages. ETL jobs log transformations. These logs often contain PHI.
What typically gets logged at integrations:
Example chain: the API Gateway logs full request bodies → logs contain {"email": "[email protected]", "diagnosis": "HIV"} → logs are shipped to CloudWatch → CloudWatch now contains PHI → it needs BAA coverage + restricted access.
When you cache PHI, you're creating additional copies in additional systems, each needing protection. Cache = extra PHI storage.
Common cache locations in integrations:
Don't cache PHI if possible. If you must: short TTL (minutes, not hours), encrypted payloads, BAA-covered infrastructure, strict access controls. A minimal sketch follows below.
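If caching is unavoidable, here is a sketch with node-redis showing the short-TTL half of that advice; the payload is assumed to have been encrypted by the caller before it gets here, and the key/TTL values are illustrative:

```typescript
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });

// If PHI must be cached at all: an opaque cache key (not a patient identifier),
// a payload that was encrypted before it reached this function, and a TTL
// measured in minutes rather than hours.
async function cachePatientSummary(opaqueKey: string, encryptedPayload: string) {
  if (!redis.isOpen) await redis.connect();
  await redis.set(opaqueKey, encryptedPayload, { EX: 300 }); // auto-expires in 5 minutes
}
```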
Before building an integration that creates PHI, ask: can we accomplish the goal WITHOUT identifiable data?
De-identification strategies:
Need: Show provider which patients viewed their health portal
❌ Bad: JOIN patients (name, email) with portal_access (timestamps, viewed_pages)
✅ Good: Aggregate: "42 patients accessed portal in last week" (no individual identification)
Need: Analytics on feature usage by diagnosis
❌ Bad: Track "[email protected] clicked glucose tracking (diabetes diagnosis)"
✅ Good: Track "Cohort: Q3-2025-Diabetes-Patients, Feature: GlucoseTracking, Count: 847 clicks"
When you CAN'T de-identify:
Some use cases legitimately need identifiable PHI (provider dashboards, patient portals). That's fine - but confirm it's necessary before building. Many assumed-necessary cases can actually work with hashed/aggregated data.
Once PHI reaches the browser, you lose control. Users can inspect network traffic, view local storage, take screenshots, use browser extensions that exfiltrate data.
Where PHI appears in browsers:
Example: a URL like /patient/12345/diabetes-plan exposes a patient ID + condition.
Self-check: Open DevTools, use your app, and check the Network and Application tabs. If you see PHI, you're exposing it to an uncontrolled environment.
Safe Harbor is HIPAA's method for de-identifying data so it's no longer considered PHI. When properly applied, you can use the data for development, testing, and analytics without PHI restrictions.
Safe Harbor requires removing 18 types of identifiers (you'll learn all of them in Module 4). For now, focus on these three that appear in common technical scenarios:
You can share the first 3 digits of a ZIP code only if all ZIP codes starting with those 3 digits have a combined population of at least 20,000 people.
| Example | Combined Population | Can Share? | What to Use |
|---|---|---|---|
| ZIP 331XX (Miami area) | 45,000 | ✅ Yes | "331XX" or "331**" |
| ZIP 059XX (Rural Vermont) | 12,000 | ❌ No | "000XX" (generic) |
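A small sketch of that rule; the population lookup is an assumption standing in for real Census data:

```typescript
// Safe Harbor ZIP handling: keep the first three digits only when the combined
// population for that prefix is at least 20,000; otherwise replace them with "000".
function safeHarborZip(
  zip: string,
  populationForPrefix: (prefix: string) => number
): string {
  const prefix = zip.slice(0, 3);
  return populationForPrefix(prefix) >= 20000 ? `${prefix}**` : "000**";
}

console.log(safeHarborZip("33101", () => 45000)); // "331**"
console.log(safeHarborZip("05901", () => 12000)); // "000**"
```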
Safe Harbor allows only the year from any date. All specific dates, months, quarters, or day-level information must be removed.
Any age over 89 must be grouped into a category like "90+" rather than showing the specific age. Ages 89 and under can be shown exactly.
| Original Ages | Safe Harbor Treatment |
|---|---|
| 23, 45, 67, 89 | ✅ Show as-is: 23, 45, 67, 89 |
| 91, 93, 95 | ✅ Aggregate: 90+, 90+, 90+ |
| 42, 67, 91, 35, 93, 28 | ✅ Mixed: 42, 67, 90+, 35, 90+, 28 |
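Two helper sketches matching the date and age rules above:

```typescript
// Safe Harbor dates: keep only the year.
function safeHarborDate(isoDate: string): string {
  return isoDate.slice(0, 4); // "2025-11-20" -> "2025"
}

// Safe Harbor ages: ages 90 and above are reported as "90+"; 89 and under as-is.
function safeHarborAge(age: number): string {
  return age >= 90 ? "90+" : String(age);
}

console.log([42, 67, 91, 35, 93, 28].map(safeHarborAge));
// ["42", "67", "90+", "35", "90+", "28"]
```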
Duration: 20-25 minutes
These principles guide every technical decision when working with PHI:
What this means in practice:
🎯 Why it matters for developers: Every copy of PHI creates a new attack surface and compliance obligation. The fewer places PHI exists, the easier it is to secure and audit.
What this means in practice:
What this means in practice:
De-identification is removing or obscuring PHI from datasets while preserving utility for development, testing, and analytics. Understanding these techniques is critical for technical teams.
Under HIPAA's Safe Harbor method, you must remove these 18 identifier types to de-identify data:
Three primary techniques for de-identifying data in technical systems:
| Method | What It Does | When To Use | Reversible? |
|---|---|---|---|
| Hashing | One-way transformation to fixed-length string | Need consistency (same input = same output) but no reversal | ❌ No |
| Encryption | Two-way transformation using a key | Need to retrieve original value later | ✅ Yes (with key) |
| Tokenization | Replace with random token, store mapping separately | Need reversibility + format preservation | ✅ Yes (with vault) |
Use Case: Logging user activity without exposing email addresses
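A sketch of that use case using a keyed hash (HMAC), so the same email always maps to the same log token; LOG_HASH_SECRET is an assumed environment variable holding a per-environment key:

```typescript
import { createHmac } from "crypto";

// Keyed one-way hash: you can still group a user's log lines by token,
// but the email address itself is never written out.
function hashForLogs(email: string): string {
  const secret = process.env.LOG_HASH_SECRET ?? "";
  return createHmac("sha256", secret)
    .update(email.toLowerCase())
    .digest("hex")
    .slice(0, 16);
}

console.info("appointment booked", { user: hashForLogs("[email protected]") });
```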
Use Case: Storing PHI that needs to be decrypted for authorized use
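A sketch using AES-256-GCM from Node's crypto module; in practice the key comes from a key-management service, never from source code or application logs:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // unique per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store the IV and auth tag alongside the ciphertext; none of them are secret.
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

function decrypt(payload: string, key: Buffer): string {
  const [iv, tag, ciphertext] = payload.split(".").map((p) => Buffer.from(p, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // stand-in for a KMS-managed key
const stored = encrypt("diagnosis: E11.9", key);
console.log(decrypt(stored, key)); // "diagnosis: E11.9"
```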
Use Case: Testing with SSN/credit card processing logic
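A toy tokenization sketch: the vault here is an in-memory Map purely for illustration; a real vault is a separate, tightly access-controlled service or encrypted store:

```typescript
import { randomBytes } from "crypto";

// Toy token vault: the token-to-value mapping lives apart from application data.
const vault = new Map<string, string>();

// Replace an SSN with a format-preserving token so downstream code that expects
// "###-##-####" still works, while the real value stays in the vault.
function tokenizeSsn(ssn: string): string {
  const digits = Array.from(randomBytes(9), (b) => String(b % 10)).join("");
  const token = `9${digits.slice(1, 3)}-${digits.slice(3, 5)}-${digits.slice(5, 9)}`;
  vault.set(token, ssn);
  return token;
}

function detokenize(token: string): string | undefined {
  return vault.get(token); // only callers with vault access can reverse the mapping
}

const token = tokenizeSsn("000-12-3456"); // obviously fake test value
console.log(token, detokenize(token) === "000-12-3456"); // e.g. "9xx-xx-xxxx" true
```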
K-anonymity ensures any individual in a dataset cannot be distinguished from at least k-1 others based on quasi-identifiers (age, ZIP, gender).
| ❌ Not K-Anonymous | ✅ K-Anonymous (k=3) |
|---|---|
| Age: 47, ZIP: 02138, Diabetes<br>Age: 52, ZIP: 02139, Asthma<br>Age: 31, ZIP: 02140, Hypertension | Age: 30-60, ZIP: 021**, Diabetes<br>Age: 30-60, ZIP: 021**, Asthma<br>Age: 30-60, ZIP: 021**, Hypertension |
Key Technique: Generalization (age ranges) + Suppression (ZIP truncation) create groups of similar records.
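A minimal sketch of that generalization step; the bucket size and truncation depth are illustrative and must be tuned until every group reaches your chosen k:

```typescript
interface RawRecord { age: number; zip: string; condition: string }
interface KAnonRecord { ageRange: string; zipPrefix: string; condition: string }

// Generalization (age buckets) + suppression (ZIP truncation). Pick the bucket
// boundaries so every (ageRange, zipPrefix) group contains at least k records
// before the dataset is released.
function generalize(record: RawRecord): KAnonRecord {
  const lower = Math.floor(record.age / 15) * 15; // 15-year buckets
  return {
    ageRange: `${lower}-${lower + 14}`,
    zipPrefix: `${record.zip.slice(0, 3)}**`, // suppress the last two digits
    condition: record.condition,
  };
}

console.log(generalize({ age: 47, zip: "02138", condition: "Diabetes" }));
// { ageRange: "45-59", zipPrefix: "021**", condition: "Diabetes" }
```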
Scenario: Even "de-identified" datasets can be re-identified when combined:
Solution: Use k-anonymity (k≥5) and never share datasets that could be linked!
Business Associate Agreements (BAAs) are contracts required under HIPAA, but technical teams often misunderstand what they actually mean for day-to-day work.
Reality: BAAs are often service-specific. Your BAA might cover S3 and RDS, but NOT CloudWatch Logs, Elasticsearch, or third-party integrations.
What you must check:
Reality: There's no such thing as "HIPAA certified." Vendors can be "HIPAA compliant," but YOU still need a signed BAA and proper technical controls.
What you must verify:
Reality: BAAs create shared responsibility. The vendor handles their infrastructure security, but YOU are responsible for:
Example: AWS has a BAA, but if you store PHI in an S3 bucket with public read access, that's YOUR breach, not Amazon's.
Your BAA with the covered entity requires you to implement:
Questions you must answer:
| Question | Why It Matters |
|---|---|
| 1. Do we have a signed BAA? | No BAA = cannot use with PHI, period |
| 2. What services does BAA cover? | May only cover specific features/tiers |
| 3. What configuration is required? | Encryption, private networks, access controls |
| 4. Where does data get stored? | Geographic/regulatory requirements |
| 5. What happens to our data when contract ends? | Data deletion obligations under BA agreement |
| 6. How do we fulfill OUR obligations? | Your BA agreement with covered entity |
❌ Wrong Approach:
"Datadog is HIPAA compliant, so I'll just add it to our stack."
✅ Right Approach - Technical Questions:
Duration: 8-10 minutes
You discover yesterday's backup script uploaded patient emails + appointment types to shared Google Drive (50 people have access).
Duration: 12-15 minutes
Generative AI has become essential for modern development, but healthcare developers face unique constraints. Understanding which tools you can use and how to use them safely is critical.
| Category | Examples | BAA Available? | Safe for PHI? |
|---|---|---|---|
| Public/Consumer AI | ChatGPT Free/Plus, Claude.ai, Gemini, Perplexity (personal accounts) | ❌ No | ❌ Never |
| Enterprise AI Platforms | ChatGPT Enterprise, Claude for Enterprise, Azure OpenAI | ✅ Yes (if configured) | ⚠️ Only with BAA + proper setup |
| Development AI Tools | GitHub Copilot, Cursor, JetBrains AI, Tabnine, Codeium | ⚠️ Varies by tier | ⚠️ Depends on version + config |
Code completion and AI coding assistants present unique challenges because they operate inside your development environment, seeing your code, comments, variable names, and potentially sensitive data.
What the AI learns from this code:
Create PHI-free development zones
Lock down AI tool access to sensitive repos
.gitignore-style rules to exclude PHI-containing files from AI indexing
Build organizational safeguards
If YES → Continue to Question 2
If NO → Safe to use (with normal security practices)
If NO → ❌ CANNOT USE with healthcare code
If YES → Continue to Question 3
If ALL YES → ✅ MAY use per organizational policy
If ANY NO → ❌ CANNOT USE until properly configured
| Policy Area | Key Questions |
|---|---|
| Approved Tools | • Which AI tools have BAAs?<br>• What tiers/versions are approved?<br>• How often is this list updated? |
| Repository Classification | • Which repos contain PHI/healthcare logic?<br>• How are they tagged/labeled?<br>• Different rules for frontend vs backend? |
| Developer Workflow | • How to request AI tool access?<br>• Mandatory training requirements?<br>• Consequences for policy violations? |
| Incident Response | • What if PHI is accidentally sent to AI?<br>• Reporting process?<br>• Remediation steps? |
This certifies that
Participant
has successfully completed the
Digital Badge: PHI/PII Technical Compliance v2.3
Date of Completion:
Training Mode:
Certificate Validity: 12 months
Tom Smolinsky, CISSP
Training Administrator
VITSO Healthcare Compliance
Date