<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Blogs on Kyler Nats | Cybersecurity Portfolio</title><link>https://kylernats.github.io/personal-blog/blog/</link><description>Recent content in Blogs on Kyler Nats | Cybersecurity Portfolio</description><generator>Hugo -- 0.159.1</generator><language>en-us</language><atom:link href="https://kylernats.github.io/personal-blog/blog/index.xml" rel="self" type="application/rss+xml"/><item><title>Navigating the CMMC 2.0 Framework</title><link>https://kylernats.github.io/personal-blog/blog/cmmc-framework/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://kylernats.github.io/personal-blog/blog/cmmc-framework/</guid><description>&lt;hr&gt;
&lt;p&gt;As a Master&amp;rsquo;s student immersed in cybersecurity frameworks, I&amp;rsquo;ve been particularly focused on CMMC 2.0. It&amp;rsquo;s more than just another set of controls, it represents a critical shift in how the Department of Defense (DoD) manages supply chain risk. For any organization looking to engage with the DIB, understanding this framework isn&amp;rsquo;t just about compliance, it&amp;rsquo;s about operational strategy.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="what-is-cmmc-20"&gt;What is CMMC 2.0?&lt;/h3&gt;
&lt;p&gt;At its core, CMMC 2.0 is the DoD&amp;rsquo;s answer to cyber vulnerabilities across its supply chain. It&amp;rsquo;s a verification program designed to ensure that defense contractors are actually protecting sensitive unclassified information. Rather than simply relying on contractors to say they&amp;rsquo;re secure, CMMC 2.0 mandates actual proof.&lt;/p&gt;</description></item><item><title>Prompt Injection vs. Jailbreaking</title><link>https://kylernats.github.io/personal-blog/blog/ai-security-jailbreaking/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://kylernats.github.io/personal-blog/blog/ai-security-jailbreaking/</guid><description>&lt;hr&gt;
&lt;p&gt;Artificial Intelligence tools are powerful. But like any system, they can be manipulated. Two common attack types you may hear about are prompt injection and jailbreaking. Let&amp;rsquo;s start by breaking down what each is.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="what-is-prompt-injection"&gt;What Is Prompt Injection?&lt;/h3&gt;
&lt;p&gt;Prompt injection happens when someone hides malicious instructions inside input data to trick an AI system.&lt;/p&gt;
&lt;p&gt;The AI believes it is reading normal content. But hidden inside that content are instructions meant to change how the AI behaves.&lt;/p&gt;</description></item></channel></rss>