Code Archives - Sourcetoad https://sourcetoad.com/category/code/ Fri, 22 Aug 2025 18:12:45 +0000

Should You Let AI Write Code for You? Productivity vs. Risk https://sourcetoad.com/should-you-let-ai-write-code-for-you-productivity-vs-risk/ Fri, 22 Aug 2025 18:10:24 +0000 https://sourcetoad.com/?p=27655 Used thoughtfully, AI can help engineers move faster, but it should never replace deep domain knowledge, security, and reliability overseen by seasoned developers.

The post Should You Let AI Write Code for You? Productivity vs. Risk appeared first on Sourcetoad.


AI coding assistants have become central to modern software development. The CEO of Robinhood recently stated that nearly 50% of their new code is AI-generated, and nearly 100% of their engineers regularly use AI tools like Copilot or Cursor. Atlassian’s study showed 68% of developers are saving at least 10 hours per week thanks to AI. However, another report from METR revealed that experienced developers working with AI tools were 19% less productive in controlled tests, compared to manual coding.

For executives making strategic decisions, these findings mean AI can be a powerful productivity multiplier, but it also introduces inefficiencies and risks. Used thoughtfully, AI can help engineers move faster, but it should never replace deep domain knowledge, security, and reliability overseen by seasoned developers.

Where AI Writes Real Value

Studies show that AI coding tools consistently offer significant gains in both speed and quality. A survey of 600+ developers revealed that 78% reported productivity improvements, with 59% confirming code quality rose alongside productivity. GitHub Copilot usage led to 55% faster HTTP server implementation in a controlled trial. Another study at Google with 96 engineers found a 21% reduction in task time.

In real-world enterprise settings, McKinsey data suggests developers using AI finish coding tasks twice as fast, and research from Axify and others indicates that automating repetitive coding chores like documentation and tests can free teams to focus on innovation. Taken together, these findings underscore that AI is no longer just a promising experiment in coding; it is consistently delivering measurable efficiency and quality gains across teams and industries.

Where AI Falls Short: Risk Zones to Watch

Despite clear benefits, AI-generated code carries limitations. The METR study showed experienced developers using AI were 19% slower, as they spent time correcting, prompting, or waiting for AI outputs. Only 44% of AI code was accepted without modification, and 9% of development time went to cleanup.

AI also struggles with complex tasks, such as multi-file codebases, nuanced business logic, and proprietary libraries, where human context is necessary. Furthermore, security, architectural integrity, and production readiness demand discipline only experienced engineers can enforce.

None of this is new to the tech industry. Pre-AI, this was referred to as “cowboy coding” among novice developers: informal, unstructured coding that skips formal processes and frameworks. What we are seeing with AI right now is highly reminiscent of that style of coding, but with machines that don’t sleep—unless of course they are programmed to!

Where Real Engineers Earn Their Keep

AI assistants work best with clear and contextual prompts. However, expertise is needed to interpret generated suggestions, evaluate architectural design, and verify security. Real engineers establish CI/CD pipelines, enforce role-based access, manage secrets, and design observability tools to maintain code integrity. Security is non-negotiable. Manual vetting of AI-supplied code is essential to detect vulnerabilities or incorrect dependencies. Engineers also architect scalable and fault-tolerant infrastructure, ensuring AI-generated advice is production-safe, compliant, and maintainable. 

Engineers reinforce their value by building durable mental models of the systems they maintain. These models are not abstract intuition alone but are constructed from the outputs of automated analysis, the clarity of happy-path documentation, and the accumulated evidence of regression testing. By synthesizing these inputs into a coherent understanding of both code structure and runtime behavior, engineers create a framework that allows them to rapidly contextualize and address issues as they arise. 

This ensures that when real users inevitably encounter failures, the response is not guesswork but grounded diagnosis informed by a deep internal map of the system’s moving parts. Such discipline transforms AI-generated contributions into production-safe assets, bridging the gap between automated suggestion and long-term maintainability. 

Framing the AI-Human Partnership Strategically

For executives, the goal is to maximize returns while minimizing risk. Here’s a balanced playbook:

    1. Apply AI for low-risk, repetitive tasks: unit tests, documentation, boilerplate, code examples.
    2. Keep engineers in the loop for architectural decisions, database design, complex API logic, and system integration.
    3. Track both velocity and quality: use metrics like cycle time, defect escape rate, and code review success alongside productivity gains.
    4. Invest in oversight and context: human auditing, prompt discipline, governance, and security checks bridge the gap between AI speed and production safety.
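To make step 3 concrete, here is a small sketch of how two of those metrics could be computed from delivery data. The data shape, field names, and sample numbers below are hypothetical, not from any study cited above:

```javascript
// Toy delivery data; the shape and numbers are illustrative only.
const sprints = [
  { started: "2025-01-06", shipped: "2025-01-10", defectsFoundInQA: 4, defectsFoundInProd: 1 },
  { started: "2025-01-13", shipped: "2025-01-20", defectsFoundInQA: 6, defectsFoundInProd: 3 },
];

// Defect escape rate: share of all defects that slipped past review/QA into production.
function defectEscapeRate(items) {
  const prod = items.reduce((sum, s) => sum + s.defectsFoundInProd, 0);
  const total = items.reduce((sum, s) => sum + s.defectsFoundInQA + s.defectsFoundInProd, 0);
  return total === 0 ? 0 : prod / total;
}

// Cycle time in days, averaged across work items.
function avgCycleTimeDays(items) {
  const days = items.map(
    (s) => (new Date(s.shipped) - new Date(s.started)) / 86_400_000 // ms per day
  );
  return days.reduce((a, b) => a + b, 0) / days.length;
}

console.log(defectEscapeRate(sprints)); // 4 / 14 ≈ 0.2857
console.log(avgCycleTimeDays(sprints)); // (4 + 7) / 2 = 5.5
```

Tracking these alongside raw velocity is what exposes the cleanup overhead the METR study describes.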

Conclusion

AI coding assistants are proven to accelerate development and ease repetitive work. But as the data shows, their value comes when paired with experienced engineers who provide the oversight, strategy, and security AI can’t deliver on its own. The organizations that will win with AI are those that strike the right balance: leveraging automation where it drives efficiency, while relying on human expertise to ensure scalability, compliance, and long-term reliability.

At Sourcetoad, we help companies navigate this balance. Whether you’re looking to train your team on best practices for AI adoption or build AI-powered software tailored to your business, our experts can guide you through the risks and opportunities. Contact us today to start building smarter with AI.

Quick Takeaways

    • AI tools can dramatically accelerate development, in some studies doubling task speed, often improving code quality as well.
    • Studies show mixed results for experienced developers, with some seeing slower output due to overhead.
    • AI excels at boilerplate, documentation, and refactoring, but engineers must oversee production, integration, infrastructure, and security.
    • Set up metrics that reflect velocity, quality, and risk—don’t rely on speed alone.
    • Using AI judiciously in tandem with expert engineers delivers both productivity boosts and operational safety.

FAQs

Can AI replace software engineers? AI cannot replace engineers; it augments them by speeding up routine work. Complex logic, architecture, and risk management still need human expertise.

How much faster does AI make code development? Studies show AI can reduce task times by 20% to 60%, depending on task complexity and developer experience.

Does AI improve code quality? In many cases, yes. Surveys found 59% saw improved quality with AI, especially when coupled with human reviews.

What tasks are best suited to AI? AI is best for unit tests, boilerplate, documentation, prototyping, and code refactoring—tasks that don’t require deep contextual knowledge.

How should executives evaluate AI ROI? Use a dashboard tracking cycle time, release frequency, defect escape rate, and cost-saving through productivity. Compare AI gains against overhead like cleanup and review time.

Laravel on AWS Fargate at Sourcetoad https://sourcetoad.com/laravel-on-aws-fargate-at-sourcetoad/ Thu, 16 Nov 2023 15:09:29 +0000 https://sourcetoad.com/?p=22141 Sourcetoad's Laravel on ECS Guide: Comprehensive tips for enterprise-grade migration.

The post Laravel on AWS Fargate at Sourcetoad appeared first on Sourcetoad.


Introduction

Here at Sourcetoad, we’ve been working on migrating our PHP projects—mostly Laravel-based—to Amazon Web Services’ Elastic Container Service (ECS). As we worked through the process, we were surprised at how scattered good information seemed to be around the web. There certainly was no comprehensive guide to running Laravel applications on ECS in what we would consider an enterprise-grade fashion.

Don’t get us wrong: many guides exist, but they appeared to be leaving out important information. Some guides had part of the story, and others had the other part. A person needs to already understand AWS best practices to really know what is left out of each one. We thought it would be interesting to share what we have done in a comprehensive guide. This guide is for people who already have a basic understanding of how to run Laravel in Docker and use ECS. It focuses on the hardening and optimizations we have done.


Migration to Alpine

Our development team has been using Docker locally for quite some time. This allows for super easy project spin up by new team members as well as consistency in environments. Since a few hundred megabytes here or there isn’t much of a concern on local environments, we were using the default Debian-based PHP-FPM Docker images.

We knew that we wanted to run Alpine-based images in production. Since these images are significantly smaller, they can be deployed faster both from the CI/CD servers to AWS and from AWS’s Elastic Container Registry (ECR) to ECS.


Multistage Builds

The next step toward a single Dockerfile used both locally and on ECS was adopting multistage builds. Developers need tools inside their Docker containers, such as Xdebug and Composer, that we don’t need in the deployment containers. Similarly, deployment containers need to copy files instead of volume-mounting them, use separate writable volumes to work with a read-only file system (we will get to that below), and other things.

Multistage builds allow us to easily build the common parts of the container, then create a separate build stage for development images. We start with the common items shared between deployment and local development, like so:

FROM php:8.1-fpm-alpine AS base
 
RUN apk -U add --no-cache \
	bash zip unzip gzip wget libzip libgomp libzip-dev libpng libpng-dev freetype-dev icu icu-dev libjpeg \
	libjpeg-turbo-dev libpq perl shadow libpq-dev imagemagick imagemagick-dev linux-headers util-linux && \
	apk add --no-cache --virtual .phpize-deps $PHPIZE_DEPS && \
	pecl install imagick redis && \
	docker-php-ext-configure gd --with-jpeg --with-freetype && \
	docker-php-ext-enable imagick redis && \
	docker-php-ext-install gd calendar zip bz2 intl bcmath opcache pdo pdo_mysql && \
	apk del .phpize-deps

Next we will add the local items as a section from base:

# Local (Developer)
FROM base AS local
 
RUN apk -U add --no-cache mariadb-client && \
	apk add --no-cache --virtual .phpize-deps $PHPIZE_DEPS && \
	pecl install xdebug && \
	docker-php-ext-enable xdebug && \
	apk del .phpize-deps
 
WORKDIR /code

Finally, we will add the deployment items as a section from the base:

FROM base AS deploy
 
RUN apk -U upgrade && apk add --no-cache supervisor aws-cli
 
RUN cp /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini && \
	rm /usr/local/etc/php-fpm.d/docker.conf && \
	sed -i -e "s/;date.timezone\s*=.*/date.timezone = 'UTC'/" /usr/local/etc/php/php.ini && \
	sed -i -e "s/.*expose_php\s*=.*/expose_php = Off/" /usr/local/etc/php/php.ini
	### Add further customizations to php.ini, www.conf, etc here
 
COPY ./docker/code/config/deploy/supervisor.conf /etc/supervisord.conf
COPY ./docker/code/config/deploy/entrypoint.sh /usr/bin/entrypoint.sh
COPY ./docker/code/config/deploy/crontab /etc/crontabs/www-data
COPY ./ /var/www/html
 
RUN chmod 755 /usr/bin/entrypoint.sh && \
	chmod 777 /tmp && \
	chown -R www-data.www-data /var/www && \
	rm -rf /var/www/html/docker
 
VOLUME /var/lib/amazon
VOLUME /var/log/amazon
VOLUME /var/www/html/bootstrap/cache
VOLUME /var/www/html/storage
VOLUME /opt/php
VOLUME /tmp
 
WORKDIR /var/www
 
CMD ["/usr/bin/entrypoint.sh"]

Let’s look at some of the more interesting items in this section: 

RUN apk -U upgrade && apk add --no-cache supervisor aws-cli

A lot of debate exists about whether you should run apk upgrade prior to deployment. At huge organizations with legions of dedicated DevOps people to recreate releases for every security update, it makes sense to want completely reproducible builds.

However, like some other Linux distributions, Alpine makes a concerted effort to ensure API/ABI compatibility of package updates for security. We feel that, for all but the largest enterprises, it makes sense for the deployment image to pull in the latest security updates. That way, if a vulnerability comes out in a specific version of Alpine, the container can be rebuilt and deployed even if the php-fpm maintainers haven’t updated their base image yet.

We are also installing supervisor and the aws-cli. The aws-cli is used to allow Amazon’s ECS exec to work in the container for debugging purposes.

Note: screen is also needed for this to work, but that is included in util-linux package in the base stage.

Supervisor is used so that the container can run Laravel queues and crons, as well as php-fpm. Ideally, large projects would have completely separate tasks that just run queues and crons. However, as the vast majority of our projects have extremely lightweight queue and cron tasks, it didn’t make sense to dedicate even a couple of quarter-CPU Fargate tasks to them.

COPY ./docker/code/config/deploy/supervisor.conf /etc/supervisord.conf

This is our supervisor entry that runs the following tasks: crond, the queue workers and php-fpm.

Best practice in Docker is to send error and other log-type events to stdout and stderr. Supervisor has built-in log rotation, which needs to be disabled by setting the *_logfile_maxbytes options to 0 for all the log file definitions. That can be seen here.

[supervisord]
nodaemon=true
user=root
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/opt/php/supervisord.pid
 
[unix_http_server]
file=/opt/php/supervisord.sock
 
[supervisorctl]
serverurl=unix:///opt/php/supervisord.sock
 
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
 
[program:php]
command=php-fpm
autostart=true
autorestart=true
user=root
startretries=100
numprocs=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
 
[program:crond]
command=crond -f
autostart=true
autorestart=true
user=root
startretries=100
numprocs=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
 
[program:laravel-queue-worker]
command=/usr/local/bin/php /var/www/html/artisan queue:work --env=%(ENV_ENV)s --timeout=0 --tries=3
autostart=true
autorestart=true
user=www-data
startretries=50
numprocs=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

COPY ./docker/code/config/deploy/entrypoint.sh /usr/bin/entrypoint.sh

This entrypoint pulls a few things into place, such as copying the .env file to a writable location on the filesystem. The most interesting part, however, is the final lines:

cd /var/www/html || exit 1
 
su -s /bin/sh www-data -c "php artisan optimize"
 
if [ -n "$RUN_MIGRATIONS" ]; then
	su -s /bin/sh www-data -c "php artisan migrate --no-interaction --force --isolated"
else
	/usr/bin/supervisord -c /etc/supervisord.conf
fi

This is pretty cool. It allows us to use the same docker container to do both migrations and run the application. We just create another task definition with the RUN_MIGRATIONS variable defined. We have our CI/CD run this task first, before updating the service with the new containers during a release.

VOLUME /var/lib/amazon
VOLUME /var/log/amazon
VOLUME /var/www/html/bootstrap/cache
VOLUME /var/www/html/storage
VOLUME /opt/php
VOLUME /tmp

We are going to discuss the advantages of the read-only root file system in the security section. When this is set, all files inside the root file system are read-only. This means any locations that need to accept data writes need to be either defined as separate volumes in the docker file, bind mounts in the ECS task definition, or EFS mounts.

Bind mounts are used inside the ECS task definition when ephemeral data needs to be shared between multiple tasks. We utilize this in some older Yii2 projects where the asset cache is built at runtime and needs to be shared between the nginx and php-fpm containers. Laravel prebuilds these assets, and the public directory just needs to be copied into the nginx container.

EFS mounts are used when data is not ephemeral and needs to survive service/container destruction. Ideally, a system stores this type of data on something like S3, but doing so is not always practical.

Finally, the volumes defined in the docker file are used for ephemeral storage that exists only within the one container for its lifespan. 

  • The first two are used by Amazon for ECS Exec.
  • The next two are the standard places Laravel needs to write data.
  • /opt/php is used for the .env file, supervisor pid files, etc.
  • /tmp is the default temp location on Linux.

Since the base app path isn’t writable and can’t hold the .env file, we had to make a slight modification to the Laravel bootstrap app.php:

<?php
 
/*
|--------------------------------------------------------------------------
| Create The Application
|--------------------------------------------------------------------------
|
| The first thing we will do is create a new Laravel application instance
| which serves as the "glue" for all the components of Laravel, and is
| the IoC container for the system binding all of the various parts.
|
*/
 
$app = new Illuminate\Foundation\Application(
	$_ENV['APP_BASE_PATH'] ?? dirname(__DIR__)
);
 

// Ensure in ECS we override the location where .env is at; fall back to the
// default when not set. This is prior to dotenv being invoked - so check
// getenv() directly.
$envLocation = getenv('APP_ENV_PATH');
if ($envLocation !== false && $envLocation !== '') {
    $app->useEnvironmentPath($envLocation);
}

When we define APP_ENV_PATH in our task definition to something like /opt/php/, Laravel will then use that path to look for the env file.

The nginx Dockerfile is quite a bit simpler, as the only customizations we use are for deployment.

FROM nginx:stable-alpine AS base
FROM base AS deploy
 
RUN apk -U upgrade && apk add --no-cache util-linux
 
COPY ./docker/nginx/config/deploy/nginx.conf /etc/nginx/nginx.conf
COPY ./docker/nginx/config/deploy/self-signed.com.* /etc/nginx/
 
RUN mkdir -p /var/www/html/public
COPY ./public /var/www/html/public
 
VOLUME /var/lib/amazon
VOLUME /var/log/amazon
VOLUME /run
VOLUME /var/cache/nginx
 
WORKDIR /var/www/html
 
EXPOSE 80 443

The most interesting line in the nginx Dockerfile is below:

COPY ./docker/nginx/config/deploy/self-signed.com.* /etc/nginx/

For our SOC 2 compliance, we use encryption in transit everywhere, even within the private VPCs of AWS. As per the AWS documentation, the ALB does not verify SSL certificates; it ensures the traffic is going to the correct location through other methods. For traffic from the ALB to target group members, AWS recommends using self-signed certificates.

During our CI/CD process, we generate a self-signed certificate, which is placed into the container and used by nginx. nginx is configured to listen only on 443, and the only inbound traffic allowed on the task’s security group is from the ALB on port 443.

That brings us nicely to the process we use to deploy this system with GitHub Actions.


Deployment with GitHub Actions

GitHub Actions is growing quickly, with an abundance of actions capable of fully deploying most types of AWS products. Amazon has published a few official actions in its GitHub organization to aid with deployment to ECS. After examining these, it seemed we would need several different actions in order to:

  • log in to ECR
  • configure OIDC AWS credentials
  • deploy an ECS task definition
  • edit/render an ECS task definition

This looked like an opportunity to create another open-source GitHub Action that unifies these steps into a smaller set. We have experience maintaining an open-source action for deploying via CodeDeploy, so we set out to build another action for deployment via ECS.

Our design goal was to unify the effort between the task definition and deployment. In most common situations, the only change during a deployment is swapping out the image property, so the established task definition simply deploys a different container than last time.

Our action handles this by automating the process of fetching the latest task definition, swapping in the freshly built container image, and updating the service with the new task definition while monitoring the release.
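The heart of that flow, swapping the image property in a fetched task definition, can be sketched in a few lines. This is an illustration, not the action’s actual code, and the task definition below is a trimmed, hypothetical example:

```javascript
// Given a fetched task definition, return a copy whose named container
// points at the freshly built image. Pure function: the input is untouched.
function swapImage(taskDef, containerName, newImage) {
  return {
    ...taskDef,
    containerDefinitions: taskDef.containerDefinitions.map((c) =>
      c.name === containerName ? { ...c, image: newImage } : c
    ),
  };
}

const current = {
  family: "app",
  containerDefinitions: [
    { name: "php", image: "123.dkr.ecr.us-east-1.amazonaws.com/app:abc123" },
    { name: "nginx", image: "123.dkr.ecr.us-east-1.amazonaws.com/web:abc123" },
  ],
};

const next = swapImage(current, "php", "123.dkr.ecr.us-east-1.amazonaws.com/app:def456");
console.log(next.containerDefinitions[0].image); // ...app:def456
console.log(current.containerDefinitions[0].image); // original is unchanged
```

Keeping the transformation pure makes it trivial to diff the old and new definitions before registering the new revision.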


Security

As we migrated to running Laravel on ECS, we made quite a few security enhancements. Some of these are possible to implement in legacy systems; some required moving to this type of setup.

Read Only Root Filesystem

We spoke above about the Read Only Root Filesystem option we are utilizing on our containers running in ECS. This is a setting inside the task definition called “readonlyRootFilesystem.” When it is enabled, the root file system of the container operates in read only mode. Nothing can alter the files on that file system. It is immutable. Any volatile files must be specifically mounted in as described above with the various mount options.

This one simple setting bestows two major benefits.

The first benefit is that it forces an entire organization, from DevOps to developers, to embrace cloud-first architecture. Containers are ephemeral, and we really need to know every possible location inside the application where data might be written. This forces everyone to understand and have a strategy for any file system writes the application performs. If someone forgets to take care when writing to the file system, the write fails immediately. That is far better than the problem hiding on a server, only to cause issues in the future.

The second is related to compliance and malicious code. Compliance items such as File Integrity Monitoring or server-side Antivirus Scanning all are attempting to solve the problem of malicious users or software modifying code to perform malicious tasks. File Integrity Monitoring also might be used to prevent cowboy coding changes to software by developers or devops personnel with little-to-no oversight.

Having a read-only root filesystem renders both of these issues moot. File Integrity Monitoring is not needed: the files simply cannot be changed. Rather than alerting about changed files after the fact, the system makes the files immutable by design. Server-side antivirus scanning isn’t needed either; a virus can’t get onto a file system that is immutable.

Note: Antivirus Scanning that runs on the server monitoring user uploads or email for viruses obviously is still needed.

Many PHP vulnerabilities revolve around a bad actor getting his or her files onto the file system through some security flaw. This simply is not possible if the file system is read only.

Vulnerability Scanning

When a file system is immutable, that doesn’t mean it can’t have security vulnerabilities in the software that it is running. CVEs are released all the time for software that we want and intend to have on our file system.

Amazon Elastic Container Registry (ECR) and ECS work hand-in-hand. While containers referenced in task definitions don’t have to be on ECR, CI/CD workflows are much cleaner when ECR is used. ECR offers Enhanced Image Scanning, which looks not only at the applications installed in the container, but also at the package dependencies at the application level. This package scanning provides a nice second check beyond the GitHub Dependabot scanning we already use.

Enhanced Image scanning can run both on push of new containers and continuously. The continuous option means that any future CVEs that are found in your container as time goes by will be found by the scanning. An image that had no vulnerabilities on push will be monitored going forward to make sure no new vulnerabilities come out that affect it.

NAT Gateway

Using a private subnet with a NAT gateway isn’t something specific to ECS. It’s just good practice in general.

In our VPCs we always create at least two subnets. The first is public. That one will contain the items that need publicly facing IPs. This mostly consists of the Application Load Balancer and NAT gateway(s). It has a path to the internet through an Internet Gateway.

The second is private. That will contain everything else. It has a path to the internet through a NAT gateway or gateways. NAT gateways exist in a single availability zone, so for highest availability, we would need a NAT gateway for each individual AZ for each subnet. Usually, outbound internet connectivity is not absolutely critical for all app functionality, so we don’t need to make that many NAT gateways (as each one has a cost).

NAT gateways have quite a few security advantages. When Fargate tasks have only private IPs, they can only reach out to the internet; actors on the internet have no ability to reach back to them. This prevents accidental ingress due to misconfigured security groups. It also helps with compliance, because system controls (a system preventing someone from doing something) are always better than manual or policy-based controls whereby a person has to remember not to do something.


Monitoring

After moving to ECS, we needed to figure out how to monitor the environment. We had some key events that could happen within ECS that we wanted to monitor.

CPU Usage

We wanted to create alarms on CPU usage for each of our ECS services. These are relatively straightforward in CloudWatch, as each is simply an alarm on a service in the cluster.

Of note: we select the Maximum CPU usage statistic. The average is not a good indicator of whether a task is overloaded.
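Expressed as configuration, such an alarm amounts to a parameter object like the following sketch. The namespace, metric name, and dimension names are the standard CloudWatch metrics for ECS services; the alarm name, threshold, and periods are illustrative choices, not values from our setup:

```javascript
// Illustrative CloudWatch alarm parameters for an ECS service's CPU.
// Statistic "Maximum" (rather than "Average") flags a pegged task even
// when the service-wide average still looks healthy.
function cpuAlarmParams(clusterName, serviceName, thresholdPercent) {
  return {
    AlarmName: `${serviceName}-cpu-max-high`, // naming convention is hypothetical
    Namespace: "AWS/ECS",
    MetricName: "CPUUtilization",
    Dimensions: [
      { Name: "ClusterName", Value: clusterName },
      { Name: "ServiceName", Value: serviceName },
    ],
    Statistic: "Maximum",
    Threshold: thresholdPercent,
    ComparisonOperator: "GreaterThanThreshold",
    Period: 60,           // seconds per datapoint
    EvaluationPeriods: 5, // sustained for five minutes before alarming
  };
}

console.log(cpuAlarmParams("prod-cluster", "app-service", 80).Statistic); // "Maximum"
```

An object of this shape could then be handed to a PutMetricAlarm call or rendered into infrastructure-as-code.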

Image Scan Alarm

We have to use EventBridge for all the rest of the alarms. It took some trial and error to get these patterns the way we wanted.

The first alarm tells us whether our Enhanced Scanning is picking up any findings.

Here is the rule we use:

{
  "source": ["aws.inspector2"],
  "detail-type": ["Inspector2 Finding"],
  "detail": {
    "severity": ["CRITICAL"],
    "status": ["ACTIVE"]
  }
}

We do audits on a regular basis looking at Amazon Inspector for findings. We get a lot of noise if we alert on severities other than critical. If we wanted to alert on HIGH as well, we would create an array such as:

"severity": ["HIGH","CRITICAL"],

Deployment Alarms

The devops team likes to know when deployments occur on production environments. 

Here is the rule we use:

{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Deployment State Change"],
  "resources": ["ARN_OF_PROD_ECS_SERVICE"]
}

We replace ARN_OF_PROD_ECS_SERVICE with the actual ARN. If we wanted to alert on deployments of more services, we could add them to the array, or remove the resources key entirely to alert on all deployments.

Service Alarms

We want to know when the state of the ECS service changes in any way that isn’t just informational. Events such as failed deployments will fall into this category.

Here is the rule we use:

{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Service Action"],
  "detail": {
    "eventType": [{
      "anything-but": "INFO"
    }]
  }
}

Task Alarms

We want to know if the individual tasks inside services or separately (in the case of migration tasks) exit with a non-success exit code.

Here is the rule we use:

{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change"],
  "detail": {
    "stopCode": [{
      "anything-but": "ServiceSchedulerInitiated"
    }],
    "lastStatus": ["STOPPED"],
    "containers": {
      "exitCode": [{
        "anything-but": 0
      }]
    }
  }
}

Notice how we exclude ServiceSchedulerInitiated events; these occur when ECS is making changes that we requested, such as deployments.
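To make the pattern semantics concrete, here is a toy matcher for just the subset of EventBridge syntax these rules use (value lists and anything-but). It is an illustration of the matching rules, not AWS code, and it ignores features such as array-valued event fields:

```javascript
// Minimal matcher for the pattern subset used in these rules:
// an array means "any of these alternatives"; { "anything-but": v } negates.
function matches(pattern, event) {
  return Object.entries(pattern).every(([key, allowed]) => {
    const value = event[key];
    if (Array.isArray(allowed)) {
      return allowed.some((alt) =>
        alt && typeof alt === "object" && "anything-but" in alt
          ? value !== alt["anything-but"]
          : value === alt
      );
    }
    // Nested pattern object: recurse into the corresponding event field.
    return Boolean(value) && typeof value === "object" && matches(allowed, value);
  });
}

const taskRule = {
  stopCode: [{ "anything-but": "ServiceSchedulerInitiated" }],
  lastStatus: ["STOPPED"],
};

console.log(matches(taskRule, { stopCode: "TaskFailedToStart", lastStatus: "STOPPED" })); // true
console.log(matches(taskRule, { stopCode: "ServiceSchedulerInitiated", lastStatus: "STOPPED" })); // false
```

Seeing the rules this way makes it easier to predict which task-stop events will page the team and which will stay quiet.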

Conclusion

Sourcetoad’s journey to migrate our Laravel-based projects to AWS ECS has been a testament to both the challenges and triumphs of adapting to cutting-edge cloud services. Through sharing our experience, we aim to illuminate the path for others and to highlight that, while the transition to ECS may be demanding, the scalability, security, and performance gains are well worth the effort. We encourage teams to embrace the meticulous planning and execution that such a move entails, and we hope our insights contribute to smoother migrations for many more Laravel applications into the ECS ecosystem.

ES6: Constants Can Change https://sourcetoad.com/es6-constants-can-change/ Tue, 21 Feb 2017 17:33:15 +0000 https://www.sourcetoad.com/?p=10990 The post ES6: Constants Can Change appeared first on Sourcetoad.

I recently made the switch from the archaic JavaScript of yesteryear that everyone loves to hate to ECMAScript 6 (otherwise known as ES6). If you’re like me, a person who hates change, then you may enjoy that one of the first things you will come to learn and love about ES6 is a new kind of variable declaration called “const.” Const, in a nutshell, allows you to declare immutable variables that can never be redefined within their scope.

I was stoked when I found out about this, man! I could finally, officially declare constant variables. No longer did I need to declare variables, do nothing with them, and just hope I would remember to keep doing nothing with them. It was a miracle! Everything was working out beautifully. Then I discovered the truth…

Const is not constant, nor truly immutable. My whole life was a lie!!!!

In most cases, const definitely seems constant.

const sillyString = "this string";
const numero = 1;
const isImmutable = false;

I cannot redefine numero to 9, for example, nor can I change isImmutable to true. Either case would bring me a bad time. This is as expected, and it could easily lead any programmer unfamiliar with const’s semantics to believe that the variable is truly immutable; but as it turns out, there are other ways than redefinition that a variable can be changed. Const plays by a looser definition of immutability and allows for those other changes.

Take this example:

const paradigm = {};

We have a new object, and it happens to be empty. You can’t redefine paradigm to be an object with a bunch of pre-defined properties, but you can give that object new properties.

paradigm.shifted = true; is a valid way to change what we once thought to be an immutable JavaScript variable. Even if you were to define paradigm with its ‘shifted’ property set to true within those brackets, you could still change the value of paradigm.shifted. A similar scenario can be made with const arrays. The array can be given new elements, as well as have elements removed and changed.
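For instance, all of the following mutations succeed on a const array; only reassigning the binding itself throws:

```javascript
const flavors = ["vanilla", "chocolate"];

flavors.push("strawberry"); // mutating the array is allowed
flavors[0] = "pistachio";   // so is replacing an element
console.log(flavors);       // ["pistachio", "chocolate", "strawberry"]

// Reassigning the binding is what const actually forbids:
try {
  flavors = []; // TypeError: Assignment to constant variable
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```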

This can become a very tragic thing, when our understanding of constants shatters beneath our footing and turns out to be not so constant after all. However, there is a better way to explain and understand the confusing being-ness of const so that it is at least palatable. And no, it is not because it is JavaScript.

The reason these sorts of variables can be changed is that it is not the variable itself that is changing. The variable paradigm was defined as an object and remains an object. It can never become some other thing. The same goes for the const array always being the thing that it is. The variables, in that sense, become their own sort of scoped environment, and the items inside of that scope are the ones that can be added, removed, redefined, and otherwise changed. The const variable is then, in that sense, immutable. Our definition of immutable, however, might not be…

 

The post ES6: Constants Can Change appeared first on Sourcetoad.

]]>
10990
View Models in the Yii Framework https://sourcetoad.com/view-models-in-the-yii-framework/ Mon, 30 Jan 2017 15:00:39 +0000 https://www.sourcetoad.com/?p=10940 The post View Models in the Yii Framework appeared first on Sourcetoad.

]]>
Ever since I started using Yii 2, I felt that the default way of passing data from controllers to views was not ideal, although I could not immediately say why. I kept doing it the way everyone does, thinking that I just need to give myself more time to get used to the framework.

If you used Yii 2 or even Yii 1, you have probably seen plenty of code that looks similar to the following:

# controllers/MovieController.php
public function actionCreate() {
    // ...
    return $this->render('create', [
        'model' => $model,
        'directors' => $directors,
        'actors' => $actors,
        'assignedDirectors' => $assignedDirectors,
        'assignedActors' => $assignedActors,
    ]);
}
# views/movie/create.php
/**
 * @var models\Movie $model
 * @var models\Director[] $directors
 * @var models\Actor[] $actors
 * @var models\Director[] $assignedDirectors
 * @var models\Actor[] $assignedActors
 */

echo $this->render('_form', [
    'model' => $model,
    'directors' => $directors,
    'actors' => $actors,
    'assignedDirectors' => $assignedDirectors,
    'assignedActors' => $assignedActors,
]);
# views/movie/_form.php
/**
 * @var models\Movie $model
 * @var models\Director[] $directors
 * @var models\Actor[] $actors
 * @var models\Director[] $assignedDirectors
 * @var models\Actor[] $assignedActors
 */

<?php
//...
echo $form->field($model, 'name')->textInput();
// ...

In our example, we have a controller method that passes five items to the “create” view, which in turn passes them to the “_form” view, which is a subview of both “create” and “edit” actions. On the surface, there is nothing wrong with the above code. After all, it is idiomatic Yii code, but if we take a closer look at it, we can notice the following issues:

  1. The names of the variables available inside the view are not easily discoverable. To find out what variables are available, you have to either look at the controller code or add docblocks to the top of the view file, a practice recommended by the authors of the framework. However, if your views pass more data to subviews, you have to duplicate the docblocks in the subviews.
  2. IDEs cannot offer refactoring support. Using the above example, assume that we want to clarify the purpose of “actors” and “directors” and rename them to “availableActors” and “availableDirectors”. Without IDE support, that requires manual changes to the controller and both views, and having to make a change in more than one place is a strike against maintainability.

Enter View Model

In the statically typed world of ASP.NET MVC, data is often passed from controllers to views using objects called view models. A view model is a class that represents the data needed by a view, and typically consists exclusively of public properties. Because the view model is a class, we can take advantage of autocompletion and, if we use docblocks for the view model’s properties, type hinting in views. As PHP matures and new features are influenced by strongly typed languages, I think it’s a good time to experiment with ideas borrowed from those languages. Let’s see how implementing view models can help us improve the maintainability of Yii applications.

First, let’s define a simple view model class:

<?php

namespace app\viewmodels\hello;

class HelloViewModel
{
    /**
     * @var string
     */
    public $message;
}

Now, let’s instantiate the view model in a controller, assign a value to the $message property, and finally pass the view model instance to a view:

<?php

namespace app\controllers;

use app\viewmodels\hello\HelloViewModel;
use yii\web\Controller;

class HelloController extends Controller
{
    public function actionHello()
    {
        $vm = new HelloViewModel();
        $vm->message = 'Hello, World!';
        return $this->render('hello', ['vm' => $vm]);
    }
}

In our view, we need to add just one docblock to enable autocompletion in the IDE.

<?php

/* @var \app\viewmodels\hello\HelloViewModel $vm */

use yii\helpers\Html;

?>

<h1><?= Html::encode($vm->message) ?></h1>

Benefits

As you can see, we made the following improvements:

  1. We need only one additional docblock at the top of each view file, no matter how many properties are defined in the view model.
  2. We can enjoy fantastic IDE support for renaming properties within the view models. In addition, the data available in views is easily discoverable via autocompletion on $vm->.
  3. If we use subviews, all we need to pass to them via the render() method is the view model object.

Refactoring to View Models

Let’s return to the opening example. If we refactor it to use view models, the resulting code may look similar to the following:

# viewmodels/movie/MovieFormViewModel.php
class MovieFormViewModel
{
    /**
     * @var \models\Movie
     */
    public $model;

    /**
     * @var \models\Director[]
     */
    public $directors;

    /**
     * @var \models\Actor[]
     */
    public $actors;

    /**
     * @var \models\Director[]
     */
    public $assignedDirectors;

    /**
     * @var \models\Actor[]
     */
    public $assignedActors;
}
# controllers/MovieController.php
public function actionCreate() {
    // ...
    $vm = new MovieFormViewModel();
    $vm->model = $model;
    $vm->directors = $directors;
    $vm->actors = $actors;
    $vm->assignedDirectors = $assignedDirectors;
    $vm->assignedActors = $assignedActors;
    return $this->render('create', ['vm' => $vm]);
}
# views/movie/create.php
/**
 * @var \viewmodels\movie\MovieFormViewModel $vm
 */

echo $this->render('_form', ['vm' => $vm]);
# views/movie/_form.php
/**
 * @var \viewmodels\movie\MovieFormViewModel $vm
 */

//...
echo $form->field($vm->model, 'name')->textInput(['maxlength' => true]);
// ...

Now that we replaced the data array passed to a view with an object, not only did we eliminate a lot of duplicate code, but — with a little assistance from our IDE — we can very easily rename the data elements in the view. My IDE already saves me a lot of time, but to finally have full IDE support in view files is simply amazing.

The post View Models in the Yii Framework appeared first on Sourcetoad.

]]>
10940
Adding Personal Flair to Your Code https://sourcetoad.com/adding-personal-flare-to-your-code/ Mon, 09 Jan 2017 21:43:47 +0000 https://www.sourcetoad.com/?p=10764 The post Adding Personal Flair to Your Code appeared first on Sourcetoad.

]]>
I’ve been thinking recently about the naming conventions I use while coding, and I noticed I tend to follow pretty strict naming standards so I don’t have to put much thought into picking a name. While standards are usually good for readability, too many standards make code seem robotic. Since (most) humans are not robots, this can actually have a negative effect on readability. Furthermore, it may save a bit of time in the short term to just follow a standard so you don’t have to think about how you’re naming variables or functions, but it seems to me that limiting the amount of thought you put into your code isn’t the best way to maximize your efficiency as a programmer.

Before learning about code style standards and naming conventions, I often wrote code more like a story than a list of instructions. Okay, so it was more like a blob of text with no particular rhyme or reason to it, but occasionally I would get a bit creative with variable names or parameters, and it was typically easier to look back at that code and recall how it worked. Not because it was any less of a mess, but because it was more engaging to read through. Some standards, like consistent indentation or camel/snake case variable names, should take precedence over personalization. But if seeing a punny variable name keeps you from spacing out while reading through code, either because you remember having fun writing it or because you enjoy reading it, I think it’s worth having.

Finding a way to make your code fun to read for you and others is also a good way of keeping yourself on your toes while you’re writing it. I think most programmers would agree that writing code is not something you can phone in. It can be easy to zone out when you’re doing something monotonous, but you can’t write effective code if you’re not thinking about it as you write it. Suppose you were writing software that writes software. In order to make this software efficient, you would probably want it to write code that is standardized in every way possible. It can be tough as a programmer to remember, but people are not software. So while a computer doesn’t mind parsing through monotonous, over-standardized code, (most) programmers will have a hard time staying focused when reading that same code.

Maybe you’re just as happy writing code that’s straightforward and minimal, and that just comes down to personal preference. If you can make writing code more enjoyable without negatively affecting readability or performance, why wouldn’t you? I’m not saying every variable name should be some elaborate, well thought-out joke, but if you can make coding fun, you should. Life’s too short to write dull code!

The post Adding Personal Flair to Your Code appeared first on Sourcetoad.

]]>
10764
Magic Methods Within Yii2 https://sourcetoad.com/magic-methods-within-yii2/ Mon, 14 Nov 2016 15:45:01 +0000 https://www.sourcetoad.com/?p=10444 The post Magic Methods Within Yii2 appeared first on Sourcetoad.

]]>
Yii and Yii2 utilize various magic methods within the framework; their implementation can be found in the Object class, from which most of the framework extends. These magic methods (__get, __set, __isset, __unset, and __call) enforce a set of conventions that are consistent throughout the application. Understanding how these magic methods are written is important to understanding how to interact with objects within the Yii framework.

The __get method within Yii2 performs several checks when it is called. First, it builds a getter name by prepending get to the name of the property being accessed. It then checks whether a method with that name exists; if it does, the getter is executed. If not, it checks whether a setter exists for the property; if so, an error is thrown indicating an attempt to access a write-only property. If all else fails, it throws a standard “Getting unknown property” error.

/**
 * Returns the value of an object property.
 *
 * Do not call this method directly as it is a PHP magic method that
 * will be implicitly called when executing `$value = $object->property;`.
 * @param string $name the property name
 * @return mixed the property value
 * @throws UnknownPropertyException if the property is not defined
 * @throws InvalidCallException if the property is write-only
 * @see __set()
 */
public function __get($name)
{
    $getter = 'get' . $name;
    if (method_exists($this, $getter)) {
        return $this->$getter();
    } elseif (method_exists($this, 'set' . $name)) {
        throw new InvalidCallException('Getting write-only property: ' . get_class($this) . '::' . $name);
    } else {
        throw new UnknownPropertyException('Getting unknown property: ' . get_class($this) . '::' . $name);
    }
}

Getters allow for the manipulation or creation of variable data on the fly. If you have two variables, first_name and last_name, in an object, you can create a method like the following:

public function getFullName() {
    return $this->first_name . ' ' . $this->last_name;
}

When called as $foo->fullName, this combines the two variables into one for easy display and concise code.

It is important to note that when utilizing these methods within a model object, where data is pulled from a data source, the get magic method is called before the variable generated by the data source is accessed.

Example:

class foo extends Object
{
    public function getBar() {
        return $this->bar;
    }
}

<?= $foo->bar ?>

In the above, even though it is unnecessary to have the getBar method to get the variable, it is still executed. This is because within the base __get call, a check is made with:

$getter = 'get' . $name;
if (method_exists($this, $getter)) {

And since method_exists is a case-insensitive function, it returns true, resulting in the getter's execution. It is usually unwise to override the normal accessing of a model’s data attributes this way, as it can lead to unintended manipulation of the data that is only visible when the code is executed.

Example:

class foo extends Object
{
    public function getBar() {
        // the getter no longer returns the attribute untouched
        return strtoupper($this->bar);
    }
}

<?= $foo->bar ?>

As you can see above, bar is now different from what is in the data source; unless you are looking at both at the same time, you wouldn’t be able to tell. Tests that do not check for explicit values would pass even though they should fail. If this is unavoidable for whatever reason, the data source should be updated by triggering the save() method within the same execution in which getBar was called.

The post Magic Methods Within Yii2 appeared first on Sourcetoad.

]]>
10444
Debugging PHP: Save Time with Xdebug’s Remote Autostart https://sourcetoad.com/debugging-php-save-time-with-xdebugs-remote-autostart/ Mon, 10 Oct 2016 15:00:08 +0000 https://www.sourcetoad.com/?p=10320 The post Debugging PHP: Save Time with Xdebug’s Remote Autostart appeared first on Sourcetoad.

]]>
Whenever I find myself working with an unfamiliar PHP codebase, Xdebug becomes one of my most important tools. Xdebug is a PHP extension that allows you to debug code by stepping through it in an editor or IDE. In my experience, stepping through the code is a great way to understand the flow of the application more easily and accelerate the process of learning the codebase.

XDEBUG_SESSION_START

By default, even after installing the Xdebug extension, you have to take further action to start a remote debugging session. This is typically done by appending a query string parameter called XDEBUG_SESSION_START to the URL. As an alternative, you can also send XDEBUG_SESSION_START with POST data. Both methods create a special cookie in your browser that instructs Xdebug to start a remote debugging session automatically, so you do not have to include the XDEBUG_SESSION_START parameter on subsequent requests. That cookie is typically valid for one hour, but the maximum age is controlled by the xdebug.remote_cookie_expire_time setting, which can be changed in the php.ini config file.

Browser Extensions

One way to simplify the process of starting a remote debugging session is to use a special browser extension such as Xdebug helper for Chrome. Such extensions place the XDEBUG_SESSION cookie in your browser and they typically make it valid for much longer than an hour. They also allow you to add or remove the cookie with a push of a button. Similar extensions are available for all major browsers.

[Screenshot: an Xdebug session started using the Xdebug helper extension]

Limitations

While all of the above methods of starting a remote debugging session are valid, they are not without limitations.

If you are developing a single-page application and your frontend and backend are hosted on different domains, setting the Xdebug cookie using an extension may not do you much good, because the extension will set the cookie on the frontend domain instead of the backend domain where you actually need it.

As another example, if you are building an API, and especially a REST API, it may not be practical to send the Xdebug cookie with all requests, but it may not necessarily be any more convenient to add the XDEBUG_SESSION_START parameter to individual requests. If your API is consumed by an Android or iOS app, having to recompile the app is just too time-consuming.

remote_autostart

Luckily, there is a better way of starting a remote debugging session. Xdebug comes with a very powerful feature called remote autostart. To enable it, add the following line to your php.ini config file:

xdebug.remote_autostart = 1

Once you have done so, Xdebug will attempt to start a remote debugging session automatically on every request without a need for special URL parameters, POST data, or cookies. As an added bonus, remote_autostart also works automatically with CLI scripts.
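For reference, here is a minimal php.ini sketch for Xdebug 2 (the extension path and host below are assumptions; adjust them for your setup — note that the remote_* options were renamed in Xdebug 3):

```ini
zend_extension = xdebug.so

xdebug.remote_enable = 1     ; remote debugging itself must be on
xdebug.remote_autostart = 1  ; start a session on every request
xdebug.remote_host = 127.0.0.1
xdebug.remote_port = 9000    ; Xdebug 2's default port
```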


Tips for PhpStorm Users

If you are a PhpStorm user, I recommend that you assign a shortcut to the “Listen for debugger connections” setting in PhpStorm as that shortcut will most likely become your replacement for all Xdebug browser extensions. And while you are adjusting PhpStorm settings, take a look at the “Break at first line in PHP scripts” feature and consider disabling it to save yourself the trouble of having to resume every remote debugging session. Simply relying on breakpoints set in the code will allow you to keep the “Listen for debugger connections” setting on for long periods of time.

Conclusion

For me, the most important advantage of remote_autostart over other methods is that it increases my productivity. I never have to manually edit any URLs or install browser extensions, regardless of whether I am debugging a legacy website or a brand new single-page application. I am sure that once you become accustomed to remote_autostart, you will wonder how you ever debugged PHP code without it.

Do you have a topic you’d like us to cover? Email your suggestion to [email protected].

The post Debugging PHP: Save Time with Xdebug’s Remote Autostart appeared first on Sourcetoad.

]]>
10320
JavaScript Decorators: True Power In ES7 https://sourcetoad.com/javascript-decorators-true-power-es7/ Wed, 06 Jul 2016 17:48:02 +0000 https://www.sourcetoad.com/?p=2689 The post JavaScript Decorators: True Power In ES7 appeared first on Sourcetoad.

]]>
JavaScript can be pretty annoying at times. You would think that with classes implemented in ES6, this would be solved. For me, the most annoying part of programming is typing. So every time I need to pass a class method as a callback in React, I have to bind the function to “this.” However, with ES7 decorators, it’s possible to use a library function (autobind-decorator) to automatically bind “this” to all class members or to individual member functions. But if you’re using Babel 6, which is the most current version, you have to use babel-plugin-transform-decorators-legacy, since ES7 decorators have not actually been finalized.

// example without autobind:

class Toad extends Component {
    constructor(props) {
        super(props);
        this.state = {
            distance: 1,
        }
        // have to bind this context to jump function
        this.jump = this.jump.bind(this);
    }
    jump() {
        alert("jumped " + this.state.distance);
    }
    jump2() {
        alert("jumped " + this.state.distance * 2);
    }
    render() {
        return (
            <div>
                <div onClick={this.jump}></div>
                {/* bind jump2 function */}
                <div onClick={this.jump2.bind(this)}></div>
            </div>
        )
    }
}
// example with autobind:
import autobind from 'autobind-decorator'
@autobind
class Autotoad extends Component {
    constructor(props) {
        super(props);
        this.state = {
            distance: 1,
        }
    }
    jump() {
        alert("jumped " + this.state.distance);
    }
    render() {
        return (
            <div onClick={this.jump}>
            </div>
        )
    }
}

At least when using arrow function syntax, the context of “this” is preserved, so code is no longer filled with bind calls or “that” and “self” workarounds.
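To see why arrow functions help, here is a small standalone sketch (no React involved; the class is my own example). The arrow callback closes over the lexical “this,” so no bind is needed:

```javascript
class Counter {
  constructor() {
    this.count = 0;
  }
  start() {
    // An old-style function() callback would get its own "this";
    // the arrow function keeps the Counter instance's "this".
    [1, 2, 3].forEach(() => {
      this.count += 1;
    });
  }
}

const c = new Counter();
c.start();
console.log(c.count); // 3
```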

So what are decorators? Basically, they are functions that wrap around other functions. Decorators can be more powerful than the language itself: they can bypass the language runtime and translate code into statically compiled form. For example, Python’s Numba compiles functions to machine code based on which functions carry its decorator.
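Stripped of the syntax sugar, a decorator is a higher-order function you could write today; the @ form simply applies it for you. A rough sketch in plain ES6 (the logged decorator below is my own invention, not a real library):

```javascript
// A decorator: takes a function, returns a wrapped version of it.
function logged(fn) {
  function wrapper(...args) {
    wrapper.calls.push(args); // record every invocation
    return fn(...args);
  }
  wrapper.calls = [];
  return wrapper;
}

// Without @ syntax, "decorating" is plain reassignment:
let add = (a, b) => a + b;
add = logged(add);

add(1, 2);
add(3, 4);
console.log(add.calls.length); // 2
```

An @logged annotation on a method would desugar to roughly this kind of wrapping, which is why decorators can do things like autobinding: they get to replace the function with anything they like.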

Should languages be changeable by the user based on preferences? What’s interesting about transpiling is that personal preferences can be satisfied instead of requiring strict compatibility with standards. What I don’t understand is why this is not more common with older languages like C++.

The post JavaScript Decorators: True Power In ES7 appeared first on Sourcetoad.

]]>
9766
An In-Depth Look at Regular Expressions Part 2: Regular Languages https://sourcetoad.com/depth-look-regular-expressions-pt-2/ Tue, 21 Jun 2016 14:35:30 +0000 https://www.sourcetoad.com/?p=2675 The post An In-Depth Look at Regular Expressions Part 2: Regular Languages appeared first on Sourcetoad.

]]>
So we’ve discussed what a language is from a computer science theory viewpoint in part 1 of this series, and we have two ways of representing languages:
Set notation: {λ, a, aa, aaa, ...} or {a^n | 0 <= n} for short,
and grammar production rules: {S -> aS | λ}

As I stated in part 1, a regular expression is a representation of not just any language, but a regular language specifically.

Regular languages are just languages with certain restrictions. To avoid getting too confusing and mathy, we’re going to skip over the formal definition of a regular language and jump straight to regular grammars; recall that a grammar is simply a way of representing a language, so a regular (or linear) grammar is a representation of a regular language (just like a regular expression!). So one example of a regular grammar is one in which every production rule is of the form: A -> aB, A -> a, or A -> λ, where A and B are non-terminals, and a is a terminal (this is called a right linear or right regular grammar).

For example:

P = {
S -> aB
B -> bC
C -> c
}

Each production rule in this grammar fits one of the forms above, so it is a regular grammar, and therefore the language represented by this grammar is a regular language.
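To make the derivation concrete, here is a quick sketch in JavaScript (my own helper, not standard machinery) that mechanically applies the productions of P until only terminals remain:

```javascript
// Each nonterminal maps to its single production from the grammar P.
const productions = { S: "aB", B: "bC", C: "c" };

// Repeatedly replace the first nonterminal (uppercase letter) until
// only terminals remain, recording each sentential form along the way.
function derive(start) {
  const steps = [start];
  let current = start;
  while (/[A-Z]/.test(current)) {
    current = current.replace(/[A-Z]/, nt => productions[nt]);
    steps.push(current);
  }
  return steps;
}

console.log(derive("S").join(" => ")); // S => aB => abC => abc
```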
Note: There are other grammars that represent this same language, such as:

P1 = {
S -> abc
}

or

P2 = {
S -> Bc
B -> Ab
A -> a
}

(For the record, P2 is an example of a left linear grammar)

Depending on the language or library you’re using, regular expressions have a lot of extra features and shortcuts, but a standard regex has only 3 basic operations:
1) Concatenation (ab)
2) Alternation (a|b)
3) Kleene Star (a*)

Any other operation can be made from these operations.
The + operator, for example, can be defined as: a+ is equivalent to a(a*)
Another useful operator is the ? operator, which can be defined as: a? is equivalent to (a|)
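These equivalences are easy to check in JavaScript, whose RegExp syntax supports all three basic operations (the sample patterns below are my own):

```javascript
// Each derived operator really is shorthand: the two patterns in each
// pair accept exactly the same strings.
const plus  = /^a+$/;       // a+ : one or more a's
const desug = /^a(a*)$/;    // its definition: concatenation + Kleene star

const opt      = /^colou?r$/;    // u? : optional u
const optDesug = /^colo(u|)r$/;  // its definition: alternation with empty

for (const s of ["", "a", "aa", "aaa", "b"]) {
  console.assert(plus.test(s) === desug.test(s));
}
console.log(opt.test("color"), opt.test("colour"));           // true true
console.log(optDesug.test("color"), optDesug.test("colour")); // true true
```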

The post An In-Depth Look at Regular Expressions Part 2: Regular Languages appeared first on Sourcetoad.

]]>
9765
Responsive Content Canvasing With Fixed Ratios https://sourcetoad.com/responsive-content-canvasing/ Mon, 13 Jun 2016 15:59:18 +0000 https://www.sourcetoad.com/?p=2640 The post Responsive Content Canvasing With Fixed Ratios appeared first on Sourcetoad.

]]>
Today I would like to discuss a concept aimed at preparing an area of content that is yet to be loaded. Or, more precisely, pre-determining the height of some content that has not yet been displayed on the screen. A great example of this would be a slideshow system such as Flexslider, which hides its content while loading and waits for the document to become ready before rendering to the view.

Due to the content being hidden on load, there is no way for the document to know what the potential content height is going to be without defining a set height, which is terrible for responsive design. This may cause issues such as the content briefly loading over top of, or offsetting the position of, other content areas on the page. Ugly as that may be, there is actually a method to prevent it, with a concept I like to call content canvasing.

This method assumes your content area will be of a fixed ratio such as 2:1, 16:9, etc., and relies on an unorthodox effect caused by setting an element’s width to 100% and its padding-top to some divisible proportion; say 50%.

Here is an example:

    
    .content-canvas {
        width: 100%;
        padding-top: 50%;
    }

 

That in turn gives us a canvas with a ratio of 2:1 to work with. We now have an element making up the space to account for when our content is finally rendered to the view. Now we just need to have our content fit within the space we have provided. The way we do that is to position our content inside the new canvas element with position absolute and margin-top set to negative the amount of top padding from our parent element, the content-canvas class.
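Because percentage padding resolves against the containing block’s width, any fixed ratio works the same way: padding-top is simply height divided by width. For 16:9 content that works out to 9 / 16 = 56.25% (the class name below is my own):

```css
.content-canvas-16x9 {
    width: 100%;
    padding-top: 56.25%; /* 9 / 16 */
}
```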

Positioning inner content

    
    .content-canvas {
        width: 100%;
        padding-top: 50%;
    }

    .content {
        position: absolute;
        margin-top: -50%;
    }

    .content img {
        width: 100%;
        height: auto;
    }

 

View the Codepen to follow along:
http://codepen.io/philsanders/pen/NNZgxZ

Conclusion

Although I did not set up an example with a Flexslider slideshow for my sample content, you should be able to see how we have prepared a space for our content and positioned it where it needs to be once it has finished loading. This method could be modified and used with most any type of content, so long as you can determine a fixed ratio to work with. And that’s it. I hope you found these tips informative and useful. Thanks for reading, and happy coding.

The post Responsive Content Canvasing With Fixed Ratios appeared first on Sourcetoad.

]]>
9763