After running some tests with the git fetch -vvv command I got the following error:
fatal: unsafe repository ('/opt/atlassian/pipelines/agent/build' is owned by someone else)
Apparently the $BUILD_DIR folder is no longer owned by the user running the pipeline by default, like it used to be until recently. As a workaround for this problem you can add the build directory to git's global safe.directory setting (note that --global writes to the user's config, not the repo's) before running any git command:
git config --global --add safe.directory /opt/atlassian/pipelines/agent/build
This command should allow you to interact with the git repo.
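The workaround can also be applied programmatically, e.g. from a build script. A small sketch in Python (the helper name is mine, and the actual config change is left commented out):

```python
import subprocess  # only needed if you actually run the command

def safe_directory_cmd(path):
    """Build the git invocation that marks `path` as a safe directory."""
    return ["git", "config", "--global", "--add", "safe.directory", path]

cmd = safe_directory_cmd("/opt/atlassian/pipelines/agent/build")
# subprocess.run(cmd, check=True)  # uncomment to apply the config for real
```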
References
https://github.com/actions/checkout/issues/760#issuecomment-1097501613
https://git-scm.com/docs/git-config/2.35.2#Documentation/git-config.txt-safedirectory
]]>
In the screenshot you can see SpamAssassin gives 1.4 points if the Date header is not included. Let's add it to our EmailMessage object:
import datetime
from email.message import EmailMessage
from email.utils import make_msgid
...
msg = EmailMessage()
msg['Subject'] = f'The contents of {textfile}'
msg['From'] = me
msg['To'] = you
msg.add_header(
    "Date", "{:%d %b %Y %H:%M:%S}".format(datetime.datetime.now())
)
This will produce the following header in the message
Date: 04 Feb 2022 16:05:36
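Note that the header above has no timezone, which RFC 2822 strictly requires. If you want a fully compliant value, the standard library can format it for you; a sketch using email.utils.format_datetime:

```python
import datetime
from email.utils import format_datetime

# An aware datetime produces a Date value with an explicit UTC offset.
now = datetime.datetime.now(datetime.timezone.utc)
date_header = format_datetime(now)
# e.g. "Fri, 04 Feb 2022 16:05:36 +0000"
```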
Similarly to the date, you can use a native function from the Python email module to generate a valid message ID for your email:
msg.add_header('Message-Id', make_msgid(domain="mydomain.com"))
This produces an RFC 2822-compliant Message-ID, e.g.
Message-Id: <[email protected]>
For this one we need a format that displays the recipient's name alongside the email address, like so:
# recipients = [{"name": "John Doe", "email": "[email protected]"}]
msg["To"] = ", ".join(
["{} <{}>".format(r["name"], r["email"]) for r in recipients]
)
Which produces the To header
To: John Doe <[email protected]>
After making these changes the spam score goes down by 3.5 points, which is a huge improvement.
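Putting the three changes together, a minimal sketch (the addresses and domain are made up for illustration):

```python
import datetime
from email.message import EmailMessage
from email.utils import make_msgid

# Hypothetical recipient list in the shape used above.
recipients = [{"name": "John Doe", "email": "john@example.com"}]

msg = EmailMessage()
msg["Subject"] = "The contents of file.txt"
msg["From"] = "me@example.com"
msg["To"] = ", ".join(
    "{} <{}>".format(r["name"], r["email"]) for r in recipients
)
msg.add_header(
    "Date", "{:%d %b %Y %H:%M:%S}".format(datetime.datetime.now())
)
msg.add_header("Message-Id", make_msgid(domain="example.com"))
```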
References
https://docs.python.org/3/library/email.examples.html#email-examples
]]>Step 1: Run this script to set up native NFS for Docker. NOTE: I have tested this on Big Sur and it works, but you may need to adjust some of the paths depending on your macOS version.
#!/usr/bin/env bash
OS=$(uname -s)
if [ "$OS" != "Darwin" ]; then
  echo "This script is macOS-only. Please do not run it on any other Unix."
  exit 1
fi
if [[ $EUID -eq 0 ]]; then
  echo "This script must NOT be run with sudo/root. Please re-run without sudo." 1>&2
  exit 1
fi
echo ""
echo " +-----------------------------+"
echo " | Setup native NFS for Docker |"
echo " +-----------------------------+"
echo ""
echo "WARNING: This script will shut down running containers."
echo ""
echo -n "Do you wish to proceed? [y]: "
read decision
if [ "$decision" != "y" ]; then
echo "Exiting. No changes made."
exit 1
fi
echo ""
if ! docker ps > /dev/null 2>&1 ; then
  echo "== Waiting for docker to start..."
fi
open -a Docker
while ! docker ps > /dev/null 2>&1 ; do sleep 2; done
echo "== Stopping running docker containers..."
docker-compose down > /dev/null 2>&1
docker volume prune -f > /dev/null
osascript -e 'quit app "Docker"'
echo "== Resetting folder permissions..."
U=$(id -u)
G=$(id -g)
sudo chown -R "$U":"$G" .
echo "== Setting up nfs..."
LINE="/System/Volumes/Data -alldirs -mapall=$U:$G localhost"
FILE=/etc/exports
sudo cp /dev/null "$FILE"
grep -qF -- "$LINE" "$FILE" || echo "$LINE" | sudo tee -a "$FILE" > /dev/null
LINE="nfs.server.mount.require_resv_port = 0"
FILE=/etc/nfs.conf
grep -qF -- "$LINE" "$FILE" || echo "$LINE" | sudo tee -a "$FILE" > /dev/null
echo "== Restarting nfsd..."
sudo nfsd restart
echo "== Restarting docker..."
open -a Docker
while ! docker ps > /dev/null 2>&1 ; do sleep 2; done
echo ""
echo "SUCCESS! Now go run your containers"
Step 2: Create your NFS volume using the docker volume command, or use docker-compose. My YAML file looks like this:
# Dev container service
...
    volumes:
      - "nfsmount:/home/appuser/code"
...
# Volumes definition
volumes:
  ...
  nfsmount:
    driver: local
    driver_opts:
      type: nfs
      o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
      device: ":/System/Volumes/Data/${PWD}"
Step 3: Rebuild your container.
For more info follow this tutorial
Resource: https://vivait.co.uk/labs/docker-for-mac-performance-using-nfs
]]>
my_list: []
Reference:
]]>
[Error - 11:54:07 AM] Enumeration of workspace source files is taking longer than 10 seconds.
This may be because:
* You have opened your home directory or entire hard drive as a workspace
* Your workspace contains a very large number of directories and files
* Your workspace contains a symlink to a directory with many files
* Your workspace is remote, and file enumeration is slow
I tried excluding certain folders from my workspace to reduce this time, by configuring [tool.pyright] directly in my pyproject.toml file:
[tool.pyright]
exclude = [
    ".cache",
    ".pytest_cache",
    ".git",
]
These subfolders are not needed for development, and after manually restarting the language server the error is gone. I also noticed a significant speed-up in the language server's error highlighting.
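If you don't keep tool configuration in pyproject.toml, Pyright also reads a standalone pyrightconfig.json at the project root; the same exclusions would look like this:

```json
{
  "exclude": [
    ".cache",
    ".pytest_cache",
    ".git"
  ]
}
```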
References
https://github.com/microsoft/pyright/blob/main/docs/configuration.md
]]>
{
  "username": "My slack bot",
  "channel": "tech",
  "attachments": [{
    "fallback": "New incoming message", // rendered when the attachment cannot be rendered, e.g. in push notifications
    "pretext": "New incoming message",
    "text": "New message!",
    "color": "good",
    "fields": [{
      "title": "Name",
      "value": "John Doe",
      "short": true
    },
    {
      "title": "Comments",
      "value": "This is a very long text and will be rendered correctly",
      "short": false
    }]
  }]
}
This payload will render a nice Slack message with different spacing for each data point in "fields". The name of the bot can be personalised and the default channel overridden with the "username" and "channel" props respectively.
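To actually deliver a payload like this, you POST it to the webhook URL. A standard-library-only sketch in Python (the URL is a placeholder, and the final urlopen call is commented out):

```python
import json
from urllib import request

# Placeholder -- use the webhook URL from your own Slack configuration.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

payload = {
    "username": "My slack bot",
    "channel": "tech",
    "attachments": [
        {
            "fallback": "New incoming message",
            "text": "New message!",
            "color": "good",
            "fields": [{"title": "Name", "value": "John Doe", "short": True}],
        }
    ],
}

req = request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req)  # uncomment to actually send the message
```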

I think it is an odd choice to design the Slack API with props like "username" and "channel" optional, defaulting to values defined when setting up the webhook. I would make them required, as that would make it clearer that the webhook URL you get is global to your Slack workspace, which is not very obvious at first.
References:
Slack Message Builder: https://api.slack.com/docs/messages/builder
Slack docs
]]>I have a Card component created using styled-components that gets translated into a div like this
<div class="Card_card__H374l">...</div>
The beginning of the class attribute value is predictable; the suffix is an auto-generated hash. So to locate the Card component in Cypress I can use
cy.get('*[class^="Card"]')
This CSS selector keeps the test readable without having to add lots of data- attributes to your application JSX.
References:
]]>I ended up using a recursive approach with generators in Python
def nested_dict_lookup(key, value, dictionary):
    for k, v in dictionary.items():
        if k == key and v == value:
            yield dictionary
        elif isinstance(v, dict):
            for result in nested_dict_lookup(key, value, v):
                yield result
        elif isinstance(v, list):
            for d in v:
                for result in nested_dict_lookup(key, value, d):
                    yield result
Let's look at a test method that illustrates how nested_dict_lookup works
def test_nested_dict_lookup():
    d = {
        "b": {"c": "2"},
        "d": {"e": [{"e": "3"}]},
        "a": {
            "b": {"c": "1", "d": [{"e": "1", "f": "2"}, {"e": "3", "f": "4"}]}
        },
        "c": {"b": [{"e": "3", "f": "2"}]},
    }
    assert list(nested_dict_lookup("e", "3", d)) == [
        {"e": "3"},
        {"e": "3", "f": "4"},
        {"e": "3", "f": "2"},
    ]
    results = nested_dict_lookup("e", "3", d)
    assert next(results, None) == {"e": "3"}
    assert next(results, None) == {"e": "3", "f": "4"}
    assert next(results, None) == {"e": "3", "f": "2"}
    assert next(results, {"value": None}) == {"value": None}
As mentioned before, nested_dict_lookup returns a generator, and every time you iterate over it using next you'll get the next result found in the dictionary.
A better approach is to use AWS IAM roles that give access to the necessary resources from your Bitbucket account.
]]>A common pattern is to sort by updated_at to keep records with some activity at the top of your list.
The problem is updated_at can be NULL if the record has only recently been created. To handle these cases you can use nullsfirst and nullslast for the ascending and descending cases respectively:
from sqlalchemy.sql.expression import nullsfirst, nullslast
...
User.query.order_by(nullsfirst(User.updated_at.asc())) # Recently created users first
...
User.query.order_by(nullslast(User.updated_at.desc())) # Recently created users last
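The effect is the same as SQL's NULLS FIRST / NULLS LAST. To illustrate the ordering in plain Python, here is a sketch with hypothetical rows, coercing None to datetime.min for comparison:

```python
from datetime import datetime

# Hypothetical rows: updated_at is None for freshly created records.
users = [
    {"name": "a", "updated_at": datetime(2022, 1, 5)},
    {"name": "b", "updated_at": None},
    {"name": "c", "updated_at": datetime(2022, 2, 1)},
]

# Ascending with None first: None is coerced to the earliest possible date.
nulls_first_asc = sorted(users, key=lambda u: u["updated_at"] or datetime.min)

# Descending with None last.
nulls_last_desc = sorted(
    users, key=lambda u: u["updated_at"] or datetime.min, reverse=True
)
```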
References
]]>