Author: Adam Clark

  • First Malicious MCP and Mitigations

    As you may have heard: the first malicious MCP [Model Context Protocol] server was found in the wild [link + archive, of course, x2]. While many articles cover the news, not many go over mitigation strategies. The good news is that many “traditional” security control mechanisms can help. Off the top of my head I can think of a few:

    Preempting issues beforehand:

    Unfortunately hindsight is 20/20, but there are things you can do beforehand to try to preempt these issues, such as:

    – Administrative Controls: Risk assessment
    – Artifact management
    – SCA [software composition analysis] or BOM [bill of materials]
    – Dependency pinning

    Many developers feel slowed down by needing to go to yet another review board, but these boards provide a valuable opportunity to ask basic questions like “Does this tool log to our enterprise logging solution?” Going through the rigmarole helps ensure everyone is security/architecture minded and has good guidance to mitigate risks [or at least implement controls to remediate afterwards].

    In addition, artifact management tools [like Artifactory, Sonatype, … others] can help by preventing developers from pulling unknown artifacts from the web at build time. Unfortunately this has the perception of slowing developers down, and it doesn’t cover artifacts pulled outside of your enterprise solution.
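    One way to spot artifacts that bypassed your internal repository is to audit a lockfile for `resolved` URLs pointing elsewhere. Here is a minimal sketch, assuming an npm `package-lock.json` and a hypothetical internal registry hostname (`artifactory.example.com` is a placeholder; the package names are illustrative):

```python
import json

# Hypothetical internal registry host -- replace with your own Artifactory/Nexus URL.
INTERNAL_REGISTRY = "artifactory.example.com"

def find_external_pulls(lockfile_text: str) -> list[str]:
    """Return package entries in an npm package-lock.json whose 'resolved'
    URL points outside the internal artifact repository."""
    lock = json.loads(lockfile_text)
    external = []
    for name, meta in lock.get("packages", {}).items():
        resolved = meta.get("resolved", "")
        if resolved and INTERNAL_REGISTRY not in resolved:
            external.append(name or "(root)")
    return external

sample = json.dumps({
    "packages": {
        "node_modules/left-pad": {
            "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz"},
        "node_modules/internal-lib": {
            "resolved": "https://artifactory.example.com/api/npm/npm-local/internal-lib-1.0.0.tgz"},
    }
})
print(find_external_pulls(sample))  # -> ['node_modules/left-pad']
```

    A check like this could run in CI to flag builds that reached out to the public internet directly.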

    There are also tools within the artifact management domain, such as SCA and BOM tooling, that can identify artifacts and where they’re used. This might be something for you to look into as well.

    Another thing to consider [which may or may not have helped in this case] is dependency pinning. Instead of automatically upgrading dependencies, you might want to keep them at a particular version or (current − 1). Unfortunately this approach has the gap of needing developers to manually update dependencies, which can get laborious.
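    To make pinning auditable, you can scan a manifest for version specs that allow automatic upgrades. A minimal sketch, assuming an npm-style `package.json` (the package names in the sample are illustrative):

```python
import json
import re

# Flags npm-style version specs that allow automatic upgrades
# (caret/tilde ranges, comparators, wildcards, "latest", hyphen ranges, OR ranges).
RANGE_PATTERN = re.compile(r"^[~^><*]|^latest$|\s-\s|\|\|")

def unpinned_dependencies(package_json_text: str) -> list[str]:
    """Return dependency specs that are not pinned to one exact version."""
    pkg = json.loads(package_json_text)
    flagged = []
    for section in ("dependencies", "devDependencies"):
        for name, spec in pkg.get(section, {}).items():
            if RANGE_PATTERN.search(spec):
                flagged.append(f"{name}@{spec}")
    return flagged

sample = json.dumps({
    "dependencies": {"postmark-mcp": "^1.0.0", "express": "4.18.2"},
    "devDependencies": {"jest": "latest"},
})
print(unpinned_dependencies(sample))  # -> ['postmark-mcp@^1.0.0', 'jest@latest']
```

    Running this as a pre-commit or CI gate turns “please pin your dependencies” from a policy into a check.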

    Preempting In-flight:

    While a request is traveling over the wire there might be ways to preempt/prevent this as a technical compensating control, e.g.:

    – Least Privilege: MTA deny BCC functionality / deny-list

    Least privilege is a common security tenet: give the application [or thing…] the least amount of privilege needed to do its job. Let’s say your app doesn’t use BCC: consider seeing if it’s possible to turn it off and log + alert when it’s used. While I’m not sure if that’s possible for the MCP issue at hand, it’s a good thing to consider in general.
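    At the application layer, a control like this could be a small policy shim in front of your mail-sending code. A minimal sketch using Python’s standard `email` library (the addresses and logger name are illustrative, not from the incident):

```python
import logging
from email.message import EmailMessage

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("mail-policy")

def enforce_no_bcc(msg: EmailMessage) -> EmailMessage:
    """Compensating-control sketch: if the app never legitimately uses BCC,
    strip the header and emit an alert-worthy log line whenever it appears."""
    if msg["Bcc"]:
        log.warning("BCC detected and stripped: %s", msg["Bcc"])
        del msg["Bcc"]
    return msg

msg = EmailMessage()
msg["To"] = "customer@example.com"
msg["Bcc"] = "attacker@example.net"  # the kind of exfiltration address to alert on
msg.set_content("hello")
enforce_no_bcc(msg)
print(msg["Bcc"])  # -> None
```

    The same idea applies one layer down: if your MTA supports it, denying BCC at the server removes the capability even when application code is compromised.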

    Detection/Remediation afterwards:

    Unfortunately, in the real world these things happen, so being able to identify and remediate the issue is paramount. Here are some things I can think of:

    – Logging: the service itself or DPI
    – Artifact management w/ logging
    – MCP registry

    Logging is one of the most common ways of looking for issues. If you have logs from either the service itself or DPI [deep packet inspection], you can look through them with a log analyzer [like Splunk, Sumo Logic, CloudWatch, etc.] to identify patterns of malicious activity [i.e. emails BCC’d to `phan@giftshop [dot] club`]. Unfortunately people might not have logging set up correctly, or might truncate logs after a short retention period. Also, some tools make setting up logs difficult and may need workarounds such as webhooks or a custom lambda/cron job to get them from point A to point B.
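    The kind of query you’d run in a log analyzer can be sketched as a simple scan for BCC recipients outside an allow-list. This is a minimal illustration, assuming a hypothetical `bcc=<addr>` MTA log format (adjust the pattern and allow-list to your own environment); the flagged address is the one reported in the cited incident:

```python
import re

# Hypothetical MTA log format -- adjust the pattern to your own logs.
BCC_LINE = re.compile(r"bcc=<(?P<addr>[^>]+)>", re.IGNORECASE)
ALLOWED_DOMAINS = {"example.com"}  # domains your app legitimately mails

def suspicious_bcc_events(log_lines):
    """Yield (line_no, address) for BCC recipients outside the allow-list."""
    for i, line in enumerate(log_lines, 1):
        m = BCC_LINE.search(line)
        if m:
            addr = m.group("addr")
            domain = addr.rsplit("@", 1)[-1].lower()
            if domain not in ALLOWED_DOMAINS:
                yield i, addr

logs = [
    "2025-09-26T10:00:01 send to=<customer@example.com> status=ok",
    "2025-09-26T10:00:02 send to=<customer@example.com> bcc=<phan@giftshop.club> status=ok",
]
print(list(suspicious_bcc_events(logs)))  # -> [(2, 'phan@giftshop.club')]
```

    In practice you would express the same filter in your analyzer’s query language rather than a script, but the detection logic is the same.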

    In addition you may want to play “whodunit”, i.e. identify teams using bad dependencies and tap them on the shoulder to fix it. One way of doing that is via your artifact management solution [like Artifactory, Sonatype, … others]. If you have logging set up you could pull a report saying X “users” are pulling the bad artifact [although mapping that machine-to-machine service account to a git repo might be another story]. Another gap is that not everybody in your org might be using the artifact solution to pull artifacts. In such a case you may want to fall back to logs from an egress gateway/DPI/network monitoring solution [i.e. Palo Alto, Corelight Zeek, Squid proxy, etc.] to see which artifacts are being pulled where, although mapping IPs to human owners can be difficult.
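    The “who pulled it” report boils down to grouping access-log lines by account. A minimal sketch, assuming a hypothetical `user=<name> path=<artifact>` access-log format (the service-account names are illustrative):

```python
from collections import Counter

def pull_report(access_log_lines, bad_artifact):
    """Count how often each account pulled the given artifact.
    Assumes a hypothetical 'user=<name> path=<artifact>' log format."""
    counts = Counter()
    for line in access_log_lines:
        # Parse whitespace-separated key=value fields into a dict.
        fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
        if bad_artifact in fields.get("path", ""):
            counts[fields.get("user", "unknown")] += 1
    return counts

logs = [
    "user=svc-team-a path=/npm/postmark-mcp/-/postmark-mcp-1.0.16.tgz",
    "user=svc-team-a path=/npm/postmark-mcp/-/postmark-mcp-1.0.16.tgz",
    "user=svc-team-b path=/npm/express/-/express-4.18.2.tgz",
]
print(pull_report(logs, "postmark-mcp"))  # -> Counter({'svc-team-a': 2})
```

    The output tells you which service accounts to chase back to owning teams.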

    Another thing to note is the emergence of MCP registries. As this technology becomes more popular and utilized in the enterprise setting, it may help identify who is using which MCP servers, as well as decrease time to market by cataloging known-good MCP servers.

    As we can see, there are many ways traditional security controls can help. As a bit of a disclaimer though, your mileage may vary 🙂

    Here’s a diagram of what a typical enterprise setup could look like:

    [Diagram: a secure enterprise environment, showing how public artifacts like NPM interact with private VPCs, CI/Git, Artifact Management, Egress/Ingress, Prod Services, Email SVC, Customers, and a Logging Analyzer, with dashed lines indicating logging pathways.]

    Works Cited:

    Most of this article was based on my industry experience as an IT engineer.

    Original article from https://www.koi.security/blog/postmark-mcp-npm-malicious-backdoor-email-theft | Archived | X2

    infosecurity-magazine article https://www.infosecurity-magazine.com/news/malicious-ai-agent-server/ | Archived | X2

    FYI: the thumbnail for this article was generated by generative AI, namely my personal Google Gemini account. Thanks, Gemini!

  • Things to do when getting a new domain

    1. Helpful links
    2. Host a website
      1. WordPress.com Hosting
      2. Don’t forget to configure your WordPress
      3. Alternatives to consider
    3. Get access to webmaster tools
      1. Google
      2. Bing
      3. Other Webmaster tools
    4. Get access to analytics tools
      1. Google Analytics
      2. Cloudflare Analytics
      3. Microsoft Clarity Analytics
    5. Create a route53 public hosted zone
    6. FOOTNOTES:

    As you may have noticed, I got new domains: adamclark.me and adamjamesclark.com.
    When I got the new domains there were a couple of things I did, and I wanted to catalog them here.

    As a matter of transparency, you can google most of these things 🙂 but having them all in one place does provide value to me.

    Helpful links

    Here are some links to other content I found useful. Do your own research though & don’t trust everything on the internet 😉
    https://dnschecker.org/
    https://wordpress.com/support/sitemaps/
    https://wordpress.com/support/markdown-quick-reference/

    Host a website

    Just buying the domain doesn’t create a website, so you’ll need to do that yourself or get hosting.

    WordPress.com Hosting

    I’ve chosen to do wordpress.com for my hosting.

    Some things I like are:

    • Fairly inexpensive
    • Easy to export from the admin console
    • WYSIWYG[^2] editing, I don’t need to make a PR[^3] then publish a static site

    Some things I don’t like are:

    • Editor is obtuse and harder to use than it should be; this could just be me needing to get used to it, though

    Don’t forget to configure your WordPress

    • Make the site private before you’re ready to publish it
    • Add tags & categories in the wp-admin section
    • Add a link to the RSS feed & sitemap to the homepage so it’s handy for search engines to find

    Alternatives to consider

    While you could use wordpress.com to host a site there are other ways too:

    • Create a static HTML page & put it in a s3 bucket
    • Create a github action to add pages to a s3 bucket on a PR
    • Other hosting providers such as wix, squarespace
    • Other wordpress/drupal/joomla hosting providers etc….

    Get access to webmaster tools

    These go by various names, such as search console, etc. The functionality is similar though: gaining insight into how search engines see your website and making changes to get it to rank better in search.

    Google

    Getting access to this is simple; I used their DNS validation to create a TXT record. FYI you’ll need to create the TXT record at the domain level, such that:

    host=@
    value=google-site-verification=*************_*_*******************************
    type=TXT
    ttl=15min
    

    FYI I used a short TTL in case I fat-fingered copying the value, but once it’s working consider using a higher TTL.
    I also masked the token in this example so nobody accidentally copy-pastes it because it only works for my domain 🙂

    Bing

    Getting access to this is simple, I used their DNS validation to create a CNAME record. Bing had me create a random subdomain with the value verify.bing.com.

    host=*********************************.adamclark.me
    value=verify.bing.com
    type=CNAME
    ttl=15min
    

    FYI I used a short TTL in case I fat-fingered copying the value, but once it’s working consider using a higher TTL.
    I also masked the token in this example so nobody accidentally copy-pastes it because it only works for my domain 🙂

    Other Webmaster tools

    It seems like everybody is making webmaster tools nowadays. Facebook, Pinterest, and Yandex also have tools.
    While I didn’t enable these they are options.

    Get access to analytics tools

    Getting access to these tools is similar to the webmaster tools, I used the DNS based methods to add my site.

    Google Analytics

    Cloudflare Analytics

    Microsoft Clarity Analytics

    Create a route53 public hosted zone

    I have a personal AWS[^1] account for hosting pet projects and POCs/POTs[^4][^5] for personal use. Until now I hadn’t used a domain with it, but I added my new domains to Route 53 so I can easily access/edit DNS there.

    1. Create the public hosted zone in AWS
    2. Copy the NS records it generated in the new public zone & copy-paste to your registrar
    3. Validate via nslookup
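    When validating step 3, the NS records from `nslookup` and the ones Route 53 generated can differ cosmetically (case, trailing dot) while still matching. A small sketch of the comparison, using made-up example nameserver names:

```python
def ns_records_match(route53_ns, registrar_ns):
    """Compare NS record sets, ignoring case and the trailing dot
    Route 53 appends (e.g. 'ns-123.awsdns-45.com.')."""
    def normalize(names):
        return {n.lower().rstrip(".") for n in names}
    return normalize(route53_ns) == normalize(registrar_ns)

route53 = ["ns-123.awsdns-45.com.", "ns-678.awsdns-90.net."]
registrar = ["NS-123.AWSDNS-45.COM", "ns-678.awsdns-90.net"]
print(ns_records_match(route53, registrar))  # -> True
```

    If this returns False, double-check that you copied all four NS records from the hosted zone to the registrar.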

    FOOTNOTES:


    [^1]: AWS stands for Amazon Web Services
    [^2]: WYSIWYG is an acronym for what you see is what you get
    [^3]: PR is an acronym for pull request, a request to “pull” code from one branch to another, typically used when working in GitHub source control.
    [^4]: POC: Proof of concept, validating a concept works, typically slightly bigger scale than a POT
    [^5]: POT: Proof of technology, validating an individual/small-scale use-case works in a given technology

  • Hello World!

    This is an example post!