<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[webthoughts.koderhut.eu - Medium]]></title>
        <description><![CDATA[Exchanging WebDev Ideas - Medium]]></description>
        <link>https://webthoughts.koderhut.eu?source=rss----f05255917723---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>webthoughts.koderhut.eu - Medium</title>
            <link>https://webthoughts.koderhut.eu?source=rss----f05255917723---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 18 Apr 2026 05:41:55 GMT</lastBuildDate>
        <atom:link href="https://webthoughts.koderhut.eu/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Docker Tips&Tricks]]></title>
            <link>https://webthoughts.koderhut.eu/docker-tips-tricks-748cf6281bc3?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/748cf6281bc3</guid>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Mon, 13 May 2019 13:47:06 GMT</pubDate>
            <atom:updated>2019-05-13T13:47:54.422Z</atom:updated>
            <content:encoded><![CDATA[<p>In this article I just want to share a few tips &amp; tricks that have helped me with different Docker-related problems in the 2+ years since I started working with Docker.</p><h3>Tip #1: docker build</h3><p>a) Is there a difference between <strong>CMD and ENTRYPOINT</strong>?</p><p>One confusing aspect that I frequently encountered while writing a Dockerfile was the similarity between the CMD and ENTRYPOINT instructions. The documentation has greatly improved regarding this difference, but it can still be confusing for newcomers to the Docker environment.</p><p>The short version:</p><p>- ENTRYPOINT is meant to configure a command that will always be executed when running a docker run myimage command. This instruction can only be overwritten by using the --entrypoint CLI parameter of the docker run command</p><p>- CMD is meant to configure a set of default parameters for the command set up with the ENTRYPOINT instruction. These parameters can be overridden by the extra parameters given to the docker run command immediately after the image name.</p><p>The confusion is created by the fact that both instructions can be used mostly in the same way: to configure an execution command for the image.</p><p>For example:</p><ul><li>save the code below in a Dockerfile, then run the following command:</li></ul><p>docker build -t myimage -f Dockerfile .</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/522bfc147800ac6853fca3153e29bf7e/href">https://medium.com/media/522bfc147800ac6853fca3153e29bf7e/href</a></iframe><ul><li>running docker run --rm myimage will output Welcome to Docker World</li><li>running docker run --rm myimage my world will output: Welcome to my world</li></ul><p>As you may have spotted, the command remained the same, echo, while the parameters used were those given at the command line.</p><p>Now update the Dockerfile with the changes below and rerun the build command:</p><iframe 
src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f1eef45546519fa79562228f356fea9f/href">https://medium.com/media/f1eef45546519fa79562228f356fea9f/href</a></iframe><ul><li>running docker run --rm myimage will have the same output as in the previous run</li><li>running docker run --rm myimage my world will now output the following error:</li></ul><blockquote>“docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused “exec: \”my\”: executable file not found in $PATH”: unknown.”</blockquote><p>This is because the parameters we added to the run command have overridden the CMD instruction in the Dockerfile. In this case, Docker tried to execute an app called my with the parameter world.</p><p>There is another small trick that is important to know when using these instructions and relying on environment variables.</p><p>Both instructions have two forms:</p><ul><li>ENTRYPOINT/CMD [&quot;command&quot;, &quot;param1&quot;, &quot;param2&quot;] (the exec form), which will execute the command directly as PID 1 inside the container</li><li>ENTRYPOINT/CMD command param1 param2 (the shell form), which will execute the command as a sub-process of the shell being used by the container</li></ul><p>This is an important distinction because running as a sub-process of the shell gives us access to the environment variables that the shell usually maintains.</p><p>There is also a third form for CMD because, as we mentioned previously, this instruction should mainly be used to provide a default list of parameters:</p><ul><li>CMD [&quot;param1&quot;, &quot;param2&quot;]</li></ul><p>b) Deferring instruction execution with <strong>ONBUILD</strong></p><p>ONBUILD comes in very useful when we need to create base images that can be used for multiple projects, by allowing us to defer the execution of the configured instructions to the downstream images.</p><p>It works by configuring trigger instructions to be run when building a later image. 
These instructions will be run as if they were added immediately after the FROM instruction in the downstream Dockerfile.</p><p>As an exercise, consider the following scenario: a DevOps or security team creates a base Docker image that copies the project’s source code to the image and then runs your preferred package manager to automatically install the project’s dependencies — this is a basic use case, but this instruction opens up a whole lot more possibilities.</p><p>This image could then be used for many other projects using the same package manager without requiring changes for each project. All the project’s customization could be done in the project’s own Dockerfile. This makes it easier to maintain base project images that can be used even across environments — dev, staging, prod.</p><p>For example, below is a base image for installing a PHP project’s dependencies using Composer.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/baaceed2aafd864a5d251df5a64e7179/href">https://medium.com/media/baaceed2aafd864a5d251df5a64e7179/href</a></iframe><p>Add the code above in a file called Dockerfile.base. 
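</p><p><em>Since the embedded code does not come through in the feed, below is a hedged sketch of what such a base image might look like; the base image tag, paths, and flags are assumptions, not the author’s original code:</em></p><pre># Dockerfile.base (sketch; details are assumed)
FROM composer:latest

# Deferred to every downstream build: copy the project’s
# dependency manifest into the image
ONBUILD COPY composer.json /app/

# Running a container then installs the dependencies
CMD ["composer", "install", "--no-interaction"]</pre><p>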
Then build the composer-base image by running this command:</p><p>docker build -f Dockerfile.base -t composer-base .</p><p><em>PS: For those of you unfamiliar with </em><a href="https://getcomposer.org"><em>Composer,</em></a><em> just add the code in the next block to a file called </em><em>composer.json stored in the same folder as the </em><em>Dockerfile</em></p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5086cf559374a807d7f3451caf712dea/href">https://medium.com/media/5086cf559374a807d7f3451caf712dea/href</a></iframe><p>In another file, called Dockerfile.project, add the following instructions:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/303914763401f779e23db5437c995b62/href">https://medium.com/media/303914763401f779e23db5437c995b62/href</a></iframe><p>Build the image by running this command:</p><p>docker build -f Dockerfile.project -t my-project .</p><p>Now, when running the container — docker run --rm my-project — Docker will create a new container and automatically start the Composer application, which will immediately begin installing the project’s dependencies.</p><p>Using the above files, it should display the following output:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/517/1*kbdEAczTh6XwntKMMzc_Cg.png" /></figure><p>c) The <strong>ARG </strong>instruction</p><p>One way to make Dockerfile files easier to maintain and reuse is to use arguments. 
Docker allows for arguments to be added to the build process by using the ARG instruction, which creates a variable inside the build scope containing the value of the argument passed through the --build-arg parameter.</p><p>To make it easier to understand how ARG works, think of the instructions before a FROM definition as being part of the global scope, and the instructions after a FROM definition as being part of the build scope.</p><p>For this exercise, let us use the following Dockerfile:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/cfd2561bad82481de75c230eb9a463b4/href">https://medium.com/media/cfd2561bad82481de75c230eb9a463b4/href</a></iframe><p>Looking at the instruction on line 1, what Docker does is define a <strong>global scope</strong> key called BASE_TAG which will be used to configure the version tag of the base image.</p><p>The ARG instruction can accept a default fallback value that will be used in case the command-line parameter is missing. This is defined as a value for the key used by the ARG instruction.</p><p>An important thing to remember is that a global scope argument will not propagate automatically inside a build scope. There is, however, a small workaround to this rule.</p><p>If we look at line 4 in our Dockerfile we notice that we run an echo command that will not output the BASE_TAG value but a simple blank line. To propagate the argument value inside the build scope we need to redeclare the argument with the same name after the FROM line. 
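</p><p><em>The embedded Dockerfile is not included in the feed; a hedged sketch consistent with the description above could look like this (the base image and default value are assumptions):</em></p><pre># Dockerfile (sketch; base image and default value are assumed)
ARG BASE_TAG=7.2-cli
FROM php:${BASE_TAG}

RUN echo "${BASE_TAG}"    # line 4: outputs a blank line, the global arg is not visible here

ARG BASE_TAG
RUN echo "${BASE_TAG}"    # now outputs the value (the default or the one given via --build-arg)</pre><p>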
If we don’t specify a default value, the default value from the global scope definition will be set automatically.</p><p>d) <strong>Multi-stage build</strong></p><p>Multi-stage builds are a recent addition, but they are an extremely useful feature allowing for easier management of build scripts and smaller Docker images.</p><p>The first benefit is that there is no longer a need to maintain separate Dockerfile files for each environment. We can prepare the Dockerfile for the production image and then add a new stage where we start from the production image and just add the development tools needed. This is useful because there is no longer a need to build and store the PRODUCTION image separately.</p><p>The smaller images are a consequence of the fact that we can separate the build tools from the application image. As such, we can start from a base image which contains our application’s build tools, run those tools to build the application, and then in the next stage start from a production-ready image and copy <strong>only</strong> the newly built application.</p><p>Stages, by default, are numbered starting with the first encountered stage as zero, but they can also be statically named using the FROM [image:version] AS stage_tag format.</p><p>Since examples can offer more context, let’s consider the following Dockerfile:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/bebf863bdbb38197922b5f3a67e3b371/href">https://medium.com/media/bebf863bdbb38197922b5f3a67e3b371/href</a></iframe><p>In order to build the production image we run the following command:</p><p>docker build -f Dockerfile -t prod_image --target=PROD .</p><p>Docker runs the build process starting at the FROM php:7.2-cli as PROD definition up to the next one. 
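</p><p><em>As the embedded Dockerfile does not survive the feed export, here is a hedged sketch of the stage layout being described (the stage bodies are assumptions):</em></p><pre># Multi-stage Dockerfile (sketch; stage contents are assumed)
FROM php:7.2-cli AS PROD
COPY . /app
CMD ["php", "/app/index.php"]

FROM PROD AS DEV
# add development tooling on top of the production image
RUN pecl install xdebug &amp;&amp; docker-php-ext-enable xdebug

FROM PROD AS STAGING
# staging-specific configuration would go here</pre><p>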
Since we said that the target is PROD, Docker will stop there and save the layers as a new Docker image.</p><p>To create the development image we run, you guessed it, the same command but with --target=DEV. In this case, Docker will build the PROD image and then proceed to build the DEV image using the layers from the PROD build.</p><p>There is a catch, unfortunately. If we wanted to build the STAGING image, Docker would build the PROD stage first, then the DEV stage second, and only then start building the STAGING stage using PROD as a base image. The DEV stage layers would be discarded since they are not referenced.</p><p>The first downside is the time taken for the DEV stage to build and the requirements it brings.</p><p>Secondly, if at any point the DEV stage fails to build, the whole process fails.</p><p>This does not seem to be the case with BuildKit, but that is a topic of its own.</p><p>In the cases discussed until now we used all the layers from the parent build, but the multi-stage build process also allows us to copy only specific files/folders from a previous build.</p><p>This can be achieved by using the --from=[stage_name/stage_id] parameter of the COPY instruction.</p><p>e) <strong>--tag &quot;name:tag&quot;</strong></p><p>The tag parameter is a bit confusing in the context of the docker build command because in almost all the documentation it usually refers to the portion of the name of an image after the colon (:), also known as a version tag. The confusing part is that if you use Docker Hub to store the image the name part needs to contain only the user name, while the tag refers to the image version. 
But if you store the image in a private registry, the name is required to contain the complete domain name of the registry, including the port number, and the path that the image is stored under.</p><p>For example: an image stored at registry.example.com:5000/my_project would have a tag such as registry.example.com:5000/my_project/my_image:latest or, for a more specific version, registry.example.com:5000/my_project/my_image:1.0.0</p><h3>Conclusions</h3><p>I think that wraps it up for this first part. If you found it useful or have any feedback or suggestions, please feel free to add them in the comments section below.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=748cf6281bc3" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/docker-tips-tricks-748cf6281bc3">Docker Tips&amp;Tricks</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Testing your infrastructure with InSpec.io]]></title>
            <link>https://webthoughts.koderhut.eu/testing-your-infrastructure-with-inspec-io-120c124c0868?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/120c124c0868</guid>
            <category><![CDATA[inspec]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[infrastructure-as-code]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Tue, 19 Mar 2019 11:42:17 GMT</pubDate>
            <atom:updated>2019-03-19T11:39:56.665Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<p>InSpec is a run-time framework and rule language used to specify compliance, security, and policy requirements for a system. It was built by the team behind Chef.io and was initially used for testing Chef cookbooks before pushing them to production. According to its history, InSpec started as a ServerSpec plugin, but it evolved into its own tool starting with version 2.</p><h3><strong>Structuring the tests</strong></h3><p>There are two documented ways of structuring your InSpec tests: the Chef cookbook way, where the tests are embedded into the Chef cookbook, and InSpec’s compliance profile way. I like to follow the second way for two main reasons: I mostly write Ansible and SaltStack roles/states, and I like to reuse my tests to check the infrastructure state both before and after running any automation.<br>I’ll use as an example a profile I wrote for testing an Ansible role that installs and configures Samba.</p><p>The folder structure looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/614/1*D9GPVDDl5LqKudfXke7ugA.png" /><figcaption>Profile structure example</figcaption></figure><h3><strong>Creating your first profile</strong></h3><p>Getting started with a profile is quite easy with InSpec. 
Using the init profile command, InSpec will create a minimal profile along with a basic example.</p><p>For example, running:</p><p>$ inspec init profile samba</p><p>creates a folder named samba in the current directory, and in that folder InSpec creates a basic structure which you can update if you want to follow along with this article.</p><h3>Anatomy of an InSpec.io Profile</h3><p>An InSpec profile is a collection of assertions grouped as compliance controls in a standalone structure with its own distribution and execution flow.</p><p>Tests are written in the InSpec DSL, an extension of the Ruby DSL optimized for writing audit controls, security, and policy requirements.</p><p>The only two requirements of a profile are to have a controls folder, where the tests will live, and a file called inspec.yml in the profile’s root, where we will store the metadata about the profile.</p><h4><strong>inspec.yml</strong></h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b9b3f63a4fa0d45cb7fded7798c13990/href">https://medium.com/media/b9b3f63a4fa0d45cb7fded7798c13990/href</a></iframe><p>The inspec.yml file is a straightforward YAML file where the only requirement is to have a key called name. This key will be used as the profile identifier by InSpec when generating the reports. There are a few more useful tags, so make sure to check out the docs; the link is down below.</p><h4>Writing infrastructure tests</h4><p>The tests will be located in the controls/ folder. 
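</p><p><em>To give an idea of what such a control file can contain, here is a hedged sketch based on the samba profile described in this article (the control identifier and descriptions are assumptions):</em></p><pre># controls/samba.rb (sketch; identifier and wording are assumed)
control 'samba-01' do
  impact 1.0
  title 'Samba server installation'
  desc 'The samba package should be installed and the daemon enabled and running'

  describe package('samba') do
    it { should be_installed }
  end

  describe service('smbd') do
    it { should be_enabled }
    it { should be_running }
  end
end</pre><p>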
There you can either put everything in a single .rb file or separate the tests into different files based on functionality.</p><p>Each .rb file will usually contain one or more control blocks.</p><p>Having controls in separate files offers the most flexibility when I need to run only specific tests.</p><p>For example, if I just want to check that the Samba server is installed I will only run the control file called controls/samba.rb</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0aac7a2436b07eda1b70b46c2bde4c91/href">https://medium.com/media/0aac7a2436b07eda1b70b46c2bde4c91/href</a></iframe><p>As you can see from the code above, in this file we have one control structure with three tests: one that checks that the samba package is installed and two others that verify that smbd, the Samba daemon, is enabled and running.</p><p>The package and service are InSpec DSL resources. At the moment of writing, InSpec has 80+ such resources covering the vast majority of audit needs. We also have the possibility of adding custom resources, which live in the libraries/ folder and will be shipped along with the profile.</p><h4>Dissecting an InSpec control block</h4><p>A control block represents a <em>‘regulatory control, recommendation, or requirement</em>’, grouping together related assertions about the state of the system along with the documentation explaining them.</p><p>It only requires an identifier, and the recommendation is to keep it simple because it will be used to map the test results to the control in the report.</p><p>Inside a control block there are a few tags of special interest:</p><ul><li>impact: describes the impact the control has on the system. 
This can be used by reporting tools or by Chef’s own compliance tool.</li><li>desc: a more detailed view of the purpose of the control</li><li>tag: one or more tags that can be added to group controls more easily in reports</li><li>ref: these keys can be used to provide additional information about the control, such as a link to a documentation page describing in more detail the assertions made by the control block</li><li>describe: one or more describe blocks using a resource and grouping one or more tests</li><li>a test is an individual assertion about the state of the resource or one of its properties. All tests begin with the keyword it or its and are grouped within a describe block.</li></ul><p>With the exception of the describe blocks, everything in the control block is optional. Even the control block itself is optional, but if we don’t use a control block we lose other useful functionality.</p><h3>Deploying and using InSpec tests</h3><p>InSpec has three modes of operation: locally, remotely over SSH, and over WinRM for Windows environments. It can also be used to test the environment inside a Docker container using the docker backend.</p><p>To run the tests you need to use the exec command as in the example below:</p><p>$ inspec exec -b local samba</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/624/1*c5ejN3K-F0iyXyn3pQ5zGQ.png" /><figcaption>Output of tests before installing the Samba app</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/627/1*PP-weSZyjYEvPFAIhXJ67Q.png" /><figcaption>Tests passing after installing the Samba app</figcaption></figure><h3>Conclusion</h3><p>InSpec is a nice small utility that can be used in a variety of scenarios ranging from compliance tests to using TDD when dealing with infrastructure.</p><p>One other use case that quickly comes to mind is using it as part of the support tools for an application. 
Checking whether the environment where the application runs conforms with the manufacturer’s specs, and even whether the application itself has all the components it needs.</p><p>I will try to cover most of these cases in later articles, so stay tuned for more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=120c124c0868" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/testing-your-infrastructure-with-inspec-io-120c124c0868">Testing your infrastructure with InSpec.io</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Serving multiple web apps using HAProxy and Docker containers]]></title>
            <link>https://webthoughts.koderhut.eu/serving-multiple-web-apps-using-haproxy-and-docker-containers-c2ca5e52a748?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/c2ca5e52a748</guid>
            <category><![CDATA[haproxy]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[proxy]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Mon, 20 Nov 2017 06:26:00 GMT</pubDate>
            <atom:updated>2017-11-20T06:26:00.685Z</atom:updated>
            <content:encoded><![CDATA[<p><em>Disclaimer: The flow presented in this article can be used in a production environment, but IT DOES NOT address any security measures that will normally be needed to run it in a production environment.</em></p><p>Since I first started working with Docker, around 2 years ago, I have found it very useful because of the speed with which it boots up a complete development environment. It also takes only a few seconds to switch between projects, which convinced me to use it as my default working environment.</p><p>Nevertheless, one thing that bothered me was that I had to shut down a container mapping a host-specific port whenever I needed another container using that same port. The most common example is a web-server container. I always needed to stop the Nginx container for one project whenever I switched to a different project. I know that with Docker’s speed this only requires a couple of commands. But it’s still a couple of commands that get annoying fast when I need to do this a few times during the course of a day.</p><p>So I started looking around for a solution. And this is what I came up with.</p><h4>Initial Environment Configuration</h4><p>I first looked at Nginx to be used as a proxy, but it requires a lot of configuration to get started with a new project. Next on the list was HAProxy. Its configuration is quite clean and easy to understand, and it only requires a couple of lines to route a new project.</p><p>Below you can find the steps to get you started with this setup.</p><p>First we need to create the HAProxy container. We do so by running:</p><pre>$ docker run -d -v /etc/haproxy:/usr/local/etc/haproxy:ro -p 80:80 \    <br>        -p 443:443 --name web-gateway haproxy</pre><p>This command will pull the latest HAProxy image and create a container from it. With this command Docker will also map a host folder, where we will keep the HAProxy configuration, and the HTTP(S) ports to the container. 
Bind-mounting the configuration from the host will allow us to edit the configuration on the host, and the container will just reload it.</p><p>Now that we configured the entry point to our server it is time to configure the web applications that we want to serve.</p><p><em>Note: for this article I will use two simple Nginx basic setups, but the exact same steps can be applied to any Docker-ized network application.</em></p><p>Next we will create three containers: one serving as a default server and two simulating two different apps.</p><pre>$ docker run -d -v /var/www/default:/usr/share/nginx/html:ro \<br>    --name default-backend nginx</pre><pre>$ docker run -d -v /var/www/web1:/usr/share/nginx/html:ro \<br>    --name web1 nginx</pre><pre>$ docker run -d -v /var/www/web2:/usr/share/nginx/html:ro \<br>    --name web2 nginx</pre><p>Probably the first question you might have right now is how the three new containers will communicate with the HAProxy container since we didn’t map any host ports. The answer is very simple: we will use Docker’s container networking.</p><p>Depending on how you would like to connect the apps you may choose to create separate networks for each container or reuse a network for several containers. In the latter case, the apps will be aware of each other and accessible through the container’s name or IP.</p><p>For the purpose of this article I will create a separate network for each container so that we keep the encapsulation features provided by Docker.</p><p>Let’s create the default server network:</p><pre>$ docker network create --attachable -d bridge default-webapp</pre><p>We are using the ‘bridge’ driver because we will be using only the current machine. If you have a multiple-machine setup you can use the ‘overlay’ driver. 
For more info, check out the <a href="https://docs.docker.com/engine/reference/commandline/network_create/">Docker documentation page</a>.<br>The ‘attachable’ option will allow us to manually attach the network to our containers, which we will need to do later.<br>Finally, we provide a name for the network through the last parameter of the command.</p><p>Using the same command go ahead and create the networks for the other two containers using the ‘web1’ and ‘web2’ names.</p><p>Now that we have configured the application containers, the HAProxy container, and the networks, let’s glue them together.</p><p>Run the following commands to connect the HAProxy container to the default Nginx server:</p><pre>$ docker network connect default-webapp web-gateway <br>$ docker network connect default-webapp default-backend</pre><p>Now the two containers can reach each other using the container name. This gives us a flexible environment to work with because we no longer need to keep track of IPs, and we can swap containers without changing any configuration. If, for example, you want to test your app with Apache httpd, just remove the Nginx container and create the new one using the same container name. Attach it to the network and you are done. 
HAProxy will not need any other configuration change to proxy the connection to your new container.</p><p>Do the same for the other two containers by using the above two commands, but be careful to use the proper network and container names :)</p><p>Now that we have connected the containers together it is time to configure HAProxy.</p><p>On the host create a file called `haproxy.cfg` under `/etc/haproxy` and paste the following configuration:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/1426779143f4166c38a064948dbc3ea7/href">https://medium.com/media/1426779143f4166c38a064948dbc3ea7/href</a></iframe><p>The sections that we will be focusing on are ‘frontend’ and ‘backend’. <br>The ‘frontend’ section declares a mapping of IP and port where HAProxy will listen for connections. In our config we are configuring HAProxy to listen on all our IPv4 addresses using ports 80 and 443.</p><p>The ‘default_backend’ option configures a default backend server where a request will be sent in case there are no other ACLs matching.</p><p>ACLs (Access Control Lists) are HAProxy’s way of identifying where a request should be routed to in order to get a response. We will need to create a new ACL entry for each new application/domain we want to serve.</p><p>The ‘backend’ sections configure the servers that should answer a request. Here, we can add as many containers as we would like to respond to requests to our application.</p><p>As we can see, because we are using named Docker containers we can simply use the container name to route our request from HAProxy to the application container.</p><p>Now restart the HAProxy container by running:</p><pre>$ docker restart web-gateway</pre><p>That’s it! 
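</p><p><em>Since the embedded haproxy.cfg does not come through in the feed, here is a hedged sketch based on the description above (the domain names and timeouts are assumptions):</em></p><pre># haproxy.cfg (sketch; hostnames and timeouts are assumed)
defaults
    mode http
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend http-in
    bind *:80
    bind *:443
    acl is_web1 hdr(host) -i web1.local
    acl is_web2 hdr(host) -i web2.local
    use_backend web1-app if is_web1
    use_backend web2-app if is_web2
    default_backend default-webapp

backend default-webapp
    server default default-backend:80

backend web1-app
    server web1 web1:80

backend web2-app
    server web2 web2:80</pre><p>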
Now, whenever we start a new project, we simply add a new network, connect HAProxy to it, and add a new ACL line and a backend server configuration.</p><h4><strong>Final thoughts</strong></h4><p>Using named Docker containers and container networking we can use a more descriptive and reusable HAProxy configuration. By using FQDNs for containers we can move the same HAProxy configuration to a network where we use bare-metal servers and/or VMs instead of Docker containers. We can also use such servers alongside containers, allowing us to have quite a flexible environment.</p><p>Although this article focuses on a development environment, this setup can be used in production as well, <strong>as long as all the necessary security measures are addressed.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c2ca5e52a748" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/serving-multiple-web-apps-using-haproxy-and-docker-containers-c2ca5e52a748">Serving multiple web apps using HAProxy and Docker containers</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Overriding parent dependencies in Symfony 3]]></title>
            <link>https://webthoughts.koderhut.eu/overriding-parent-dependencies-in-symfony-3-66d04c1ac91a?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/66d04c1ac91a</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[php]]></category>
            <category><![CDATA[symfony]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Wed, 21 Dec 2016 19:00:25 GMT</pubDate>
            <atom:updated>2016-12-25T08:44:17.211Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/450/1*J9_J-z9o9xbBXfVahXkIGg.jpeg" /></figure><p>One of the features I like when working with multiple inheriting services in a Symfony project is the ability to define parent and derived services. This means that two services can share a service configuration definition inside the Symfony DependencyInjection Container (aka DIC).</p><p>Instead of copying the same configuration lines for each related service, a developer can define a parent service and then simply inherit that configuration in the services extending the parent. A benefit of this technique is that it makes service definitions cleaner and easier to manage.</p><h3>The Problem</h3><p>In a recent project I was working on I found myself needing to override the arguments of an abstract parent service definition. Although the Symfony documentation is informative and up-to-date with all the framework’s features, the only proposed solution I was able to find for this scenario was modifying the service configuration using PHP code. Of course, there was the possibility of writing a CompilerPass and changing the arguments, but that meant writing a lot of unnecessary code for my case.</p><p>So, I started wondering if the same could be achieved using only YAML or XML definitions. 
Looks like it’s possible.</p><h3>Going under the hood</h3><p>One thing that really stuck with me over the years is that whenever I need a good, clean solution for a programming problem I need to get my hands “dirty” and dig deeper into the code.</p><p>After a bit of debugging through the DIC component’s internals I found that the DefinitionDecorator::replaceArgument() method is called from the ResolveDefinitionTemplatesPass::resolveDefinition() method if the arguments in the inheriting service definition have a key starting with index_.</p><pre>#Symfony/Component/DependencyInjection/Compiler/ResolveDefinitionTemplatesPass.php</pre><pre>private function resolveDefinition(ContainerBuilder $container, DefinitionDecorator $definition)<br>{<br>...<br>// merge arguments<br>foreach ($definition-&gt;getArguments() as $k =&gt; $v) {<br>    if (is_numeric($k)) {<br>        $def-&gt;addArgument($v);<br>        continue;<br>    }</pre><pre>    if (0 !== strpos($k, &#39;index_&#39;)) {<br>        throw new RuntimeException(sprintf(&#39;Invalid argument key &quot;%s&quot; found.&#39;, $k));<br>    }</pre><pre>    $index = (int) substr($k, strlen(&#39;index_&#39;));<br>    $def-&gt;replaceArgument($index, $v);<br>}<br>…<br>}</pre><pre>#Symfony/Component/DependencyInjection/DefinitionDecorator.php</pre><pre>public function replaceArgument($index, $value)<br>{<br>    if (!is_int($index)) {<br>        throw new InvalidArgumentException(&#39;$index must be an integer.&#39;);<br>    }</pre><pre>    $this-&gt;arguments[&#39;index_&#39;.$index] = $value;</pre><pre>    return $this;<br>}</pre><p>In simple terms, whenever we need to override an argument of a parent service we simply add a key composed of ‘index_’ and the index number of the argument. Keep in mind that both YAML and XML are “translated” to PHP code, meaning that argument indexes start at 0 (zero).</p><p>The downside of this solution is that it is limited to __construct() dependency injection. 
If you need to replace a setter- or property-injected dependency you will have to use the documented solutions.</p><h3>Quick example</h3><p>Below you can find an example configuration, in both YAML and XML, for overriding the second argument of the abstract ‘mail_manager’ service:</p><pre># services.yml</pre><pre>services:<br>    my_alternative_mailer:<br>        # ...</pre><pre>    mail_manager:<br>        abstract: true<br>        arguments:<br>            - &#39;@my_email_formatter&#39;<br>            - &#39;@my_mailer&#39;</pre><pre>    #injecting the alternate mailer<br>    newsletter_manager:<br>        class:  NewsletterManager<br>        parent: mail_manager<br>        arguments:<br>            index_1: &#39;@my_alternative_mailer&#39;</pre><pre>    #using the mailer defined in the parent definition<br>    greeting_card_manager:<br>        class:  GreetingCardManager<br>        parent: mail_manager</pre><pre>#services.xml<br>...<br>&lt;services&gt;<br>    &lt;service id=&quot;my_alternative_mailer&quot; class=&quot;...&quot;&gt;<br>       …<br>    &lt;/service&gt;</pre><pre>    &lt;service id=&quot;mail_manager&quot; abstract=&quot;true&quot;&gt;<br>        &lt;argument type=&quot;service&quot; id=&quot;my_email_formatter&quot; /&gt;<br>        &lt;argument type=&quot;service&quot; id=&quot;my_mailer&quot; /&gt;<br>    &lt;/service&gt;</pre><pre>    &lt;service id=&quot;newsletter_manager&quot; class=&quot;NewsletterManager&quot; parent=&quot;mail_manager&quot;&gt;<br>        &lt;argument type=&quot;service&quot; id=&quot;my_alternative_mailer&quot; index=&quot;1&quot; /&gt;<br>    &lt;/service&gt;<br>&lt;/services&gt;</pre><h3>Conclusion</h3><p>If you need to quickly override a parent service’s argument definition then this solution might help you accomplish that task faster. 
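</p><p>For contrast, the CompilerPass route mentioned earlier would look roughly like the sketch below. The class name is invented and the service names are the same placeholders as in the YAML example above; this is the boilerplate the index_ trick lets us avoid:</p>

```php
use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;

// Hypothetical pass replacing the second constructor argument (index 1)
// of the 'newsletter_manager' service with the alternative mailer.
class ReplaceMailerPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        $container->getDefinition('newsletter_manager')
                  ->replaceArgument(1, new Reference('my_alternative_mailer'));
    }
}
```

<p>The index_1 key achieves the same result without this extra class and its registration in the bundle.</p><p>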
This technique removes the need for creating compiler passes and provides a cleaner, easier way of managing different dependencies in similar or inheriting services.</p><p>Disclaimer: The Symfony logo is a trademark of Fabien Potencier.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=66d04c1ac91a" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/overriding-parent-dependencies-in-symfony-3-66d04c1ac91a">Overriding parent dependencies in Symfony 3</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Automatic deployment with Deployer]]></title>
            <link>https://webthoughts.koderhut.eu/automatic-deployment-with-deployer-b3eb39c88665?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/b3eb39c88665</guid>
            <category><![CDATA[php]]></category>
            <category><![CDATA[deployment]]></category>
            <category><![CDATA[deployer]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Wed, 21 Dec 2016 18:59:14 GMT</pubDate>
            <atom:updated>2016-12-25T08:44:48.478Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/466/0*PV7PVMYgcQwCSr3F.png" /></figure><p>Deployer is a modular, lightweight deployment automation tool. A few of its top features are atomic deploys and parallel deployments.</p><p>In this article we will create a recipe to automate the deployment of an OctoberCMS app on multiple servers.</p><h3>Prerequisites</h3><p>First, we need to install Deployer on the deployment machine. We have three ways to accomplish this [1].</p><ul><li>The simplest way is to download the PHAR [2] archive from the main website. To do so, run the following commands in the terminal:</li></ul><pre>$ cd ~/ <br>$ mkdir deployer <br>$ cd deployer/ <br>$ wget http://deployer.org/deployer.phar -O deployer <br>$ chmod +x deployer <br>$ mv deployer /usr/local/bin/deployer</pre><ul><li>We can now run the Deployer app from anywhere on our deployment system.</li><li>If you want to integrate Deployer into your project you can use Composer. <br>For this article we will create a separate project so that we do not mix Deployer with our code. <br>In our terminal let’s write the following:</li></ul><pre>$ cd ~/ <br>$ mkdir deployer <br>$ cd deployer/ <br>$ composer require deployer/deployer:~3.0</pre><ul><li>After Composer finishes we can start Deployer by running:</li></ul><pre>$ php vendor/bin/dep</pre><ul><li>For geeks, myself included, you can download the source from the GitHub repository and then build Deployer from it.</li></ul><pre>$ cd ~/ <br>$ mkdir deployer <br>$ cd deployer/ <br>$ git clone https://github.com/deployphp/deployer.git <br>$ php ./build</pre><h3>Deployment recipes</h3><p>Recipes are PHP script files containing the tasks Deployer needs to perform to deploy the application. By default, Deployer provides a common.php recipe where, as the name suggests, a few frequently used tasks are stored. 
<br>Note: The common.php file is not downloaded with the PHAR archive, so we will need to add it to our deployment project before continuing. You can find a copy on the GitHub repo from the links below.</p><p>Besides the common tasks recipe, the Deployer devs have also added, in their GitHub repo, recipes for some of the most popular frameworks, like Symfony and Laravel, to mention a few. There you can also find boilerplate recipes for platforms like Magento, Drupal or WordPress. Before starting your own recipe from scratch, first check whether there isn&#39;t already a boilerplate built, by using the links at the bottom of this article [3].</p><p>To start writing our recipe we first need to create a deploy.php file in the root folder of our deployment project. In this file we will define a few tasks to run in order to deploy our OctoberCMS app. <br>After you create the deploy.php file, open it in your preferred editor and add the &#39;common&#39; recipe by adding the next line:</p><pre>require_once &#39;recipes/common.php&#39;;</pre><p>Now, let’s configure a few default things. Add the next few lines to the deploy.php file:</p><pre>set(&#39;repository&#39;, &#39;git@github.com:koderhut/YOUR-REPO-URL.git&#39;);</pre><pre>set(&#39;keep_releases&#39;, 2);</pre><pre>set(&#39;shared_dirs&#39;, [<br>    &#39;storage/framework/cache&#39;,<br>    &#39;storage/framework/sessions&#39;,<br>    &#39;storage/framework/views&#39;,<br>    &#39;storage/cms/cache&#39;,<br>    &#39;storage/logs&#39;,<br>    &#39;storage/temp&#39;,<br>    &#39;vendor&#39;,<br>]);</pre><pre>set(&#39;shared_files&#39;, [&#39;.env&#39;]);</pre><pre>env(&#39;code_path&#39;, &#39;/var/www/YOUR_OCTOBERCMS_CODE_PATH_HERE&#39;);</pre><pre>env(&#39;branch&#39;, &#39;master-prod&#39;);</pre><p>The lines above configure Deployer to keep only 2 other versions of our application’s code for us to revert to in case our current deployment goes wrong. 
Then, we instruct Deployer to retrieve the code via Git from our repository. This article assumes your repo is publicly available; if that is not the case, I will describe the additional configuration needed for private repos in a later article.</p><p>The shared_dirs and shared_files lines above instruct Deployer to create symlinks to these folders and files in our deployed project folder. This is very useful when you have configuration files which you don&#39;t want to add to a repository, or folders containing media files, user uploads or content that is not otherwise required by the application to run.</p><p>In our scenario we need to create symlinks to folders like the logs/ folder, the folders containing the sessions and cache and even our third-party vendor code. These folders are neither needed nor recommended to be stored inside our application&#39;s repository. By creating symlinks to these folders we won&#39;t have multiple copies of them occupying unnecessary space, or lose session or cache data between the time we copy the files and the time our deployment finishes.</p><p>Also, the .env file contains environment-specific configuration data, such as database login info, which should not be stored within the app repo. <br>The last two configuration lines set up a few default values for Deployer. These lines instruct Deployer which Git branch to download from the repository and the path on the server where to deploy the code. These configs are optional at this point and you can also specify them in the server configuration. Specifying them in the server configuration section is useful if you want to use the same deployment script for both testing and production environments.</p><h3>Tagging the deployment</h3><p>*Quick reference: In a Deployer script a task is a PHP closure that contains the code necessary to deploy our app. A task is created using the task() function, which is part of the Deployer code. 
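</p><p>Just to illustrate the shape of a task before we write a real one (the task name and message here are arbitrary, not part of the article&#39;s recipe):</p>

```php
// A minimal Deployer 3.x task: a name plus a closure with the work to do.
task('hello', function () {
    writeln('Hello from Deployer');
})->desc('Smallest possible task');
```

<p>Running php vendor/bin/dep hello would then execute the closure.</p><p>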
<br>Before deploying the code let&#39;s first create a Git tag. For this we will create a new task called deploy:tag-deployment in our deploy.php file:</p><pre>task(&#39;deploy:tag-deployment&#39;, function() {<br>    $codePath = env(&#39;code_path&#39;);<br>    $time     = date(&#39;d/m/Y\TH-i-s&#39;);</pre><pre>    cd($codePath);</pre><pre>    runLocally(&quot;git tag -a -m &#39;Deployment of version {$time}&#39; &#39;{$time}&#39;  &amp;&amp; git push origin --tags&quot;);<br>});</pre><p>This task creates a new Git tag and pushes it to the main repo. Let’s now go ahead and create the deploy task.</p><pre>task(&#39;deploy&#39;, [<br>//    &#39;deploy:tag-deployment&#39;, //ONLY USE THIS TASK IF NOT DEPLOYING USING THE PARALLEL FEATURE<br>    &#39;deploy:prepare&#39;,<br>    &#39;deploy:release&#39;,<br>    &#39;deploy:update_code&#39;,<br>    &#39;deploy:shared&#39;,<br>    &#39;deploy:vendors&#39;,<br>    &#39;deploy:writable&#39;,<br>    &#39;deploy:symlink&#39;,<br>    &#39;deploy:symlink:web&#39;,<br>    &#39;deploy:cp-htaccess&#39;,<br>    &#39;cleanup&#39;,<br>])-&gt;desc(&#39;Deploy your project&#39;);</pre><pre>after(&#39;deploy&#39;, &#39;success&#39;);</pre><p>You may have noticed that we are using the same task function but passing an array as the second argument instead of the closure as we&#39;ve previously done. That is no mistake. The task function supports either a closure or an array of task names. Using an array, we&#39;re instructing Deployer to call the tasks in the order we set them. It is a shorthand for a closure in which we call the tasks ourselves in a specific order. The Deployer docs call it &#39;task grouping&#39;; I like to call it the &#39;deployment storyline.&#39;</p><p>We finish the script with an after function call to simply print a message that the deployment completed successfully. 
As with the task function, after is a function defined by Deployer which executes a closure or task after the specified task has been executed successfully [4].</p><h3>Server configuration</h3><p>We are almost done. We just need to add the server configuration and we will be ready to start deploying the code. <br>Since the server configuration may contain sensitive information, we will keep the server configuration data separate from the script. Go ahead and create a new file called server_config.php and add the following lines:</p><pre>server(&#39;octo-deploy-prod&#39;, &#39;192.168.56.123&#39;, &#39;22&#39;)<br>    -&gt;user(&#39;vagrant&#39;)<br>    -&gt;identityFile(&#39;ssh1/id_rsa.pub&#39;, &#39;ssh1/id_rsa&#39;, &#39;OPTIONAL_SSH_ENCRYPTION_KEY&#39;)<br>    -&gt;env(&#39;deploy_path&#39;, &#39;/var/www/octo.dep&#39;)<br>    -&gt;env(&#39;branch&#39;, &#39;master&#39;)<br>    -&gt;stage(&#39;test-october-deploy&#39;);</pre><pre>server(&#39;octo-deploy-test&#39;, &#39;192.168.56.124&#39;, &#39;22&#39;)<br>    -&gt;user(&#39;vagrant&#39;)<br>    -&gt;identityFile(&#39;ssh2/id_rsa.pub&#39;, &#39;ssh2/id_rsa&#39;, &#39;OPTIONAL_SSH_ENCRYPTION_KEY&#39;)<br>    -&gt;env(&#39;deploy_path&#39;, &#39;/var/deploy_test/www/octo.dep&#39;)<br>    -&gt;env(&#39;branch&#39;, &#39;master&#39;)<br>    -&gt;stage(&#39;test-october-deploy&#39;);</pre><p>*Note: make sure you change the paths to the SSH key files and the deployment path according to your requirements. This setup is only for the purpose of this article and should not be considered foolproof.</p><p>The previous lines define two deployment servers. As I mentioned before, you can configure the deployment path and even the Git branch to be deployed in the server config for a more granular, server-specific configuration. We’ve also named these configurations octo-deploy-prod and octo-deploy-test in order to be able to select the server where we deploy the code at run-time solely based on a parameter. 
To make it easier for us to deploy the code on multiple servers we also configured a stage [5] name.</p><p>In this manner we group all the servers, and when we start the deployment Deployer is smart enough to execute the same script on all the servers in the selected stage.</p><p>You can add as many servers as you need to this setup. By separating them into testing and production stages, you can use the same deploy script to deploy to each environment based on a single parameter passed at run-time.</p><p>Let&#39;s include the server config into our script by adding the next line of code into the deploy.php file, right under the first require_once:</p><pre>require_once &#39;server_config.php&#39;;</pre><p>That’s the entire deployment script. You can now run it using the following command from the deployment project’s root folder:</p><pre>$ php vendor/bin/dep deploy test-october-deploy</pre><h3>Final thoughts</h3><p>It’s simple, isn’t it? You are now ready to start deploying your OctoberCMS apps automatically. In a later article I will provide a few more tips and recipes on deployment using Deployer scripts. 
Until then, please leave your comments and feedback using the comments section below or review the full files associated with this article on my Github repo.</p><p><a href="https://github.com/rendler-denis/octobercms-deployment-with-deployer">Github repository</a></p><p>References:</p><p>1 — <a href="http://deployer.org/docs/installation">Deployer Installation docs</a> <br>2 — <a href="http://php.net/manual/en/intro.phar.php">PHAR archives</a> <br>3 — <a href="https://github.com/deployphp/deployer/tree/master/recipe">Boilerplate recipes</a> <br>4 — <a href="http://deployer.org/docs/tasks">Deployer docs — Tasks</a> <br>5 — <a href="http://deployer.org/docs/servers">Deployer docs — Servers</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b3eb39c88665" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/automatic-deployment-with-deployer-b3eb39c88665">Automatic deployment with Deployer</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Access Twig object variables using custom accessors]]></title>
            <link>https://webthoughts.koderhut.eu/access-twig-object-variables-using-custom-accessors-329fda80a9e0?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/329fda80a9e0</guid>
            <category><![CDATA[php]]></category>
            <category><![CDATA[twig]]></category>
            <category><![CDATA[octobercms]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Wed, 21 Dec 2016 18:53:54 GMT</pubDate>
            <atom:updated>2016-12-21T18:53:54.043Z</atom:updated>
            <content:encoded><![CDATA[<p>Twig is a lightweight templating engine built by the team developing the Symfony project. One of its goals is to offer a friendly environment for developers as well as designers. Its simple language is intuitive but powerful enough to allow easy object and array accessing and even more complex expressions. <br>A few of its features are:</p><blockquote><strong><em>Fast</em></strong><em>: Twig compiles templates down to plain optimized PHP code. The overhead compared to regular PHP code was reduced to the very minimum.</em></blockquote><blockquote><strong><em>Secure</em></strong><em>: Twig has a sandbox mode to evaluate untrusted template code. This allows Twig to be used as a template language for applications where users may modify the template design.</em></blockquote><blockquote><strong><em>Flexible</em></strong><em>: Twig is powered by a flexible lexer and parser. This allows the developer to define its own custom tags and filters, and create its own DSL</em></blockquote><blockquote>Quote from the <a href="http://twig.sensiolabs.org/doc/intro.html">Twig documentation</a></blockquote><p>In this article I will be talking about a problem I faced during the development of a plug-in for OctoberCMS: accessing object data inside a Twig template using custom accessors.</p><h3>Short description of the context</h3><p>The plug-in I am referring to is called <a href="https://octobercms.com/plugin/koderhut-templatetokens"><strong>TemplateTokens</strong></a> and it allows an OctoberCMS end-user or theme developer to add custom variables, also called tokens inside the plug-in. 
These tokens will be available inside the Twig templates during rendering and, at least for the moment, simply output the set value.</p><p>The problem I was facing was that I wanted to keep the simple method of accessing a variable that Twig provides and, at the same time, give the end-user a flexible way to configure, update or delete a token.</p><p>Twig provides two ways of accessing object properties in its templates: either the dot (.) notation or the attribute() function. Both of these methods follow the same code path:</p><blockquote><em>- check if foo is an array and bar a valid element;</em></blockquote><blockquote><em>- if not, and if foo is an object, check that bar is a valid property;</em></blockquote><blockquote><em>- if not, and if foo is an object, check that bar is a valid method (even if bar is the constructor — use __construct() instead);</em></blockquote><blockquote><em>- if not, and if foo is an object, check that getBar is a valid method;</em></blockquote><blockquote><em>- if not, and if foo is an object, check that isBar is a valid method;</em></blockquote><blockquote><em>- if not, return a null value.</em></blockquote><blockquote>Quote from the <a href="http://twig.sensiolabs.org/doc/templates.html#variables">Twig documentation</a></blockquote><p>As you may notice from its object-access logic, Twig offers out of the box a quick way to access dynamic properties using custom accessors.</p><p>In my case I am retrieving a collection of objects from the database. I then expose that collection of objects, through a wrapper, to the Twig templates. This allowed me to maintain Twig’s dot-notation method of accessing variables and at the same time control the data to be accessed.</p><p>The idea covered all my requirements. It kept the templates simple by providing the end-user, or theme developer, Twig’s dot-notation way to access the data. 
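</p><p>As a quick illustration of the two access styles (foo and bar are placeholder names):</p>

```twig
{# These two expressions follow the same resolution path described above: #}
{{ foo.bar }}
{{ attribute(foo, 'bar') }}

{# attribute() is mostly useful when the attribute name itself is dynamic: #}
{{ attribute(foo, dynamic_name) }}
```

<p>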
At the same time, the wrapper offers a simple interface to access the stored data, dispatching the retrieval of the tokens from a data source to the collection object.</p><p>Now, let’s see how we can access the data.</p><h3>One problem, three solutions</h3><p>After a bit of debugging and code browsing I was able to find three possible ways of attacking this problem.</p><p><strong>1. Implement a single method into the wrapper that will be called using the <em>attribute()</em> function I mentioned above</strong> <br>This solution means that you will have a single method in your object that you would call from the template and pass it the name of the token. Although this conforms with Twig’s standards, it will quickly become tiresome for the end-user. <br>Think what it would take to write the following line 20 times:</p><pre>{{ attribute(foo, &#39;getData&#39;, &#39;token_name&#39;) }}</pre><p>It’s not that fun, right? It is also prone to user errors. <br>Luckily the next two solutions will make it a bit easier for the end-user.</p><p><strong>2. Implement the magic methods <em>__get()</em> and <em>__isset()</em> into the wrapper</strong> <br>This is a simple approach and is also one of the proposed methods described in the <a href="http://twig.sensiolabs.org/doc/recipes.html#using-dynamic-object-properties">Twig docs</a>. <br>This solution is possible because of the second check from the list we viewed above — <strong>if foo is an object, check that bar is a valid property.</strong> <br>It accomplishes this by calling the PHP magic method <strong>__isset()</strong> and providing it with the name of the property to check. This in turn creates a new problem: the object needs to know which properties exist. For dynamic properties you will need to implement custom logic to check if the parameter passed is a valid property of the object. 
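</p><p>To make that custom logic concrete, here is a sketch (not the plug-in&#39;s actual code; the class and token names are invented) of a wrapper whose __isset() answers from a plain array of tokens:</p>

```php
<?php

// Hypothetical wrapper over a token map. __isset()/__get() let Twig's dot
// notation resolve dynamic "properties" that are really array entries.
class TokenWrapper
{
    /** @var array */
    private $tokens;

    public function __construct(array $tokens)
    {
        $this->tokens = $tokens;
    }

    // Twig checks this first; report only the tokens we actually hold.
    public function __isset($name)
    {
        return array_key_exists($name, $this->tokens);
    }

    // Called by Twig once __isset() has returned true.
    public function __get($name)
    {
        return isset($this->tokens[$name]) ? $this->tokens[$name] : null;
    }
}

$wrapper = new TokenWrapper(['site_name' => 'KoderHut']);
var_dump(isset($wrapper->site_name)); // bool(true)
echo $wrapper->site_name, "\n";       // KoderHut
```

<p>In a template, {{ wrapper.site_name }} would then resolve through these same two magic methods.</p><p>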
<br>In my case that was handled by the collection, while the wrapper, which was accessed by Twig directly, did not have a clue whether a token with that name existed. Luckily, the Eloquent collection object had a few methods that helped with this. Good guy, Eloquent. :) <br>To continue, if the <strong>__isset()</strong> method returns TRUE then Twig tries accessing the variable as:</p><pre>$object-&gt;varName;</pre><p>Below is a quick example of a class definition allowing for this type of access:</p><pre>class A <br>{<br>    public function __isset($varName)<br>    {<br>        return (bool)isset($this-&gt;$varName);<br>    }</pre><pre>    public function __get($varName)<br>    {<br>        return $this-&gt;$varName;<br>    }<br>}</pre><p>This means that Twig will call the <strong>__get()</strong> magic method to retrieve the value. <br>You can find a quick working example in the repo I prepared for this article or the Twig docs.</p><p><strong>3. Implement the magic method __call() into the wrapper</strong> <br>*<strong>Warning: This solution is not documented and it might break Twig standards.</strong> <br>This is one of the most flexible solutions I could think of, simply because:</p><ul><li>it requires implementing only 1 method, the <strong><em>__call()</em></strong> magic method</li><li>it allows the end-user to access both object properties as well as object methods using the same exact simple notation that Twig offers</li><li>it makes just one call to your object, instead of two as per the above solution</li><li>it allows for customized access rules — permitting access to otherwise unavailable methods or properties.</li></ul><p>But being flexible does come with a few tradeoffs:</p><ul><li>having only one method to implement means that you will have to check if the parameter received is either a method of the object or a property</li><li>it also means that you will have to implement the logic for accessing the object data as well as the 
restrictions and validations needed all in one method. I know that we can use some fancy design patterns here, but it is still only one method with more than one responsibility.</li></ul><pre>public function __call($method, $params)<br>{<br>    if (method_exists($this, $method)) {<br>        return $this-&gt;$method(...$params);<br>    }</pre><pre>    if (isset($this-&gt;$method)) {<br>        return $this-&gt;$method;<br>    }</pre><pre>    return null;<br>}</pre><h3>Conclusion</h3><p>Depending on your requirements you can use any of the methods described in this article. I found the call to the __get() method useful with Magento&#39;s models, which have their own implementation of the __get() method to retrieve the data. And the second solution fits perfectly with the current implementation of the plug-in I mentioned. <br>Check out the GitHub repo examples and let me know your thoughts, examples or feedback using the form below.</p><p>Links</p><ol><li><a href="https://github.com/rendler-denis/twig_var_access_examples">Github repo</a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=329fda80a9e0" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/access-twig-object-variables-using-custom-accessors-329fda80a9e0">Access Twig object variables using custom accessors</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Configure access permissions for OctoberCMS plug-ins]]></title>
            <link>https://webthoughts.koderhut.eu/configure-access-permissions-for-octobercms-plug-ins-2e9c61c378cc?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/2e9c61c378cc</guid>
            <category><![CDATA[octobercms]]></category>
            <category><![CDATA[laravel]]></category>
            <category><![CDATA[php]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Wed, 21 Dec 2016 18:51:35 GMT</pubDate>
            <atom:updated>2016-12-21T18:51:34.670Z</atom:updated>
            <content:encoded><![CDATA[<p>While updating one of my OctoberCMS plug-ins, I wanted to enable admins to restrict access to the configuration section of the plug-in. This meant implementing the permissions API of the OctoberCMS platform. Although the documentation for the platform is starting to cover a lot of its features, it seems there are still a few aspects missing.</p><p>The documentation covers well how to set up permissions when using backend controllers for plug-in configuration, but makes no mention of how to accomplish the same when using a SettingsModel implementation.</p><h3>Hint: It’s all in the plug-in definition</h3><p>To register an access permission open your ‘Plugin.php’ plug-in definition class and add the following method:</p><pre>class Plugin<br>    extends PluginBase<br>{<br>    ...<br>    public function registerPermissions()<br>    {<br>        return [<br>            &#39;vendor_name.plugin_name.config_permission&#39; =&gt; [<br>                &#39;label&#39; =&gt; &#39;Permission Label&#39;,<br>                &#39;tab&#39;   =&gt; &#39;Tab Name&#39;<br>            ],<br>        ];<br>    }</pre><pre>}</pre><p>The <strong>registerPermissions()</strong> method configures a permission type that you will need to add to a backend user in order to allow that user access to those configuration options. <br>The method needs to return an array of permissions in the same form as in the code section above. The array element’s key is composed of the vendor name, the plug-in name and an identifier for the permission, all connected using dot notation. The value of the array’s element is itself an array composed of two required elements: label and tab. The label is used to display friendly text in the user interface. The tab element is used for grouping multiple permissions under specific tabs for easier access.</p><p>Now that we have set up the permissions, let’s instruct OctoberCMS to enforce these rules. 
<br>To do so, in your plug-in’s definition class edit the <strong>registerSettings()</strong> method:</p><pre>class Plugin<br>    extends PluginBase<br>{<br>    ...<br>    public function registerSettings()<br>    {<br>        return [<br>            &#39;name_of_group&#39; =&gt; [<br>                &#39;label&#39;       =&gt; &#39;Group label&#39;,<br>                &#39;description&#39; =&gt; &#39;Quick description&#39;,<br>                &#39;icon&#39;        =&gt; &#39;icon-rss&#39;,<br>                &#39;class&#39;       =&gt; &#39;VendorName\PluginName\Models\ModelClassName&#39;,<br>                &#39;order&#39;       =&gt; 500,<br>                &#39;keywords&#39;    =&gt; &#39;rss feed&#39;,<br>                &#39;category&#39;    =&gt; &#39;Settings category&#39;,<br>                &#39;permissions&#39; =&gt; [&#39;vendor_name.plugin_name.config_permission&#39;],<br>            ]<br>        ];<br>    }</pre><pre>}</pre><p>Quick recap on the configs needed</p><p>The <strong>registerSettings</strong>() method is used to configure a plug-in’s settings page inside the OctoberCMS platform. This is well documented in the <a href="https://octobercms.com/docs/plugin/settings">docs</a> and I will not go into too much detail in this article, but only point out a few of the key settings I had trouble understanding too. <br>First, the key of each element in the returned array represents the name that OctoberCMS will use to identify a configuration page and also to generate the links to it if you are implementing the SettingsModel for plug-in configuration. You can use any name you want as keys, but be careful because that name will be used in URLs as well. So, it needs to be URL-friendly. <br>The category key is used to group the different configuration pages of a plug-in in the left sidebar of the Settings section. 
One issue I had was that I was using this config as a vendor name group tag, but OctoberCMS uses the exact value of that key in order to generate the group-code attribute of the tag. This means that if you use a translation tag for the value, OctoberCMS will automatically generate a separate group for two plug-ins even though the translated value is the same. <br>Finally, the permissions tag. The value for this key must be an array of values representing the permissions a user needs in order to gain access to that specific configuration page. The values for this tag are combined in the same way as we registered them above: the vendor_name followed by the plug-in name and either a * (asterisk symbol) or a specific access identifier. Using the asterisk symbol means that any user who has any permission of type <em>vendor_name.plugin_name</em> can access that page.</p><h3>Conclusion</h3><p>To make a long story short, first register the permission identifiers with the OctoberCMS platform by using the <strong>registerPermissions</strong>() method. Then, add a permissions key with an array of permission identifiers to each configuration page you register with OctoberCMS through the <strong>registerSettings</strong>() method. <br>The permissions key we discussed throughout the article also applies when your plug-in uses backend controllers. In this case OctoberCMS will only hide the link to your configuration page, but the page will still be accessible with a direct link.
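</p><p>A minimal sketch of such a guard is the <strong>$requiredPermissions</strong> property on the backend controller class, reusing the permission identifier registered above (the controller name here is a placeholder):</p><pre>class MyController<br>    extends \Backend\Classes\Controller<br>{<br>    // only users granted this permission may access the controller&#39;s pages<br>    public $requiredPermissions = [&#39;vendor_name.plugin_name.config_permission&#39;];<br>}</pre><p>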
To properly secure the config page when using a backend controller, you also need to add a few validations as described in the <a href="https://octobercms.com/docs/backend/users#page-access">documentation</a>.</p><p>*Disclaimer: The OctoberCMS name and logo used in this article are property of <a href="http://octobercms.com/">octobercms.com</a> and its creators.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2e9c61c378cc" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/configure-access-permissions-for-octobercms-plug-ins-2e9c61c378cc">Configure access permissions for OctoberCMS plug-ins</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Extending an OctoberCMS component]]></title>
            <link>https://webthoughts.koderhut.eu/extending-an-octobercms-component-1c981910ed30?source=rss----f05255917723---4</link>
            <guid isPermaLink="false">https://medium.com/p/1c981910ed30</guid>
            <category><![CDATA[octobercms]]></category>
            <category><![CDATA[php]]></category>
            <category><![CDATA[laravel]]></category>
            <dc:creator><![CDATA[Denis-Florin Rendler]]></dc:creator>
            <pubDate>Wed, 21 Dec 2016 17:25:26 GMT</pubDate>
            <atom:updated>2016-12-21T17:25:26.109Z</atom:updated>
            <content:encoded><![CDATA[<p>When it comes to blogging and small websites there is no doubt that WordPress comes first in everyone&#39;s mind. That’s because of the huge number of plug-ins its ecosystem provides. Basically, if you can think of <strong>it</strong>, chances are <strong>it</strong> already exists for WordPress.</p><p>I consider myself a security-conscious person, and for me WP doesn’t have a good track record in this regard. This, coupled with the fact that most of the WP code is not built using an object-oriented architecture, making it very difficult at times to extend or maintain, made me search for an alternative.</p><p>That’s how I found <a href="http://www.bolt.cm/">BoltCMS</a> and <a href="http://www.octobercms.com/">OctoberCMS</a>. Both of these have the ease of use that WP gives a writer, but unlike WP these CMSs help the developer as well. Both systems’ code is based on industry standards, giving developers a clean environment to work with, either maintaining the system or adding new functionality as need requires. It also helps designers and front-end developers by integrating <a href="http://twig.sensiolabs.org/">Twig</a>, a nice, friendly templating engine, allowing them to bring their vision of the UI to the users using a flexible, secure and easy-to-learn syntax and making template portability between CMSs a breeze.</p><p>In this article I will talk about OctoberCMS only, but be sure to check out BoltCMS too. If you just want to see the code, jump to the bottom of the article to find the GitHub link.</p><h3>Digging into the code</h3><p>I presume you already have OctoberCMS installed; if not, you can use the link above.
I will not detail the installation process for OctoberCMS here because it is an easy process and well documented on their own website.</p><p>Enough with the introductions; let’s jump into the code by extending a component from a third-party plugin.</p><p>For this article we will extend <a href="https://octobercms.com/plugin/rainlab-blog">RainLab’s Blog plugin</a>, and more precisely the Post component. We will add a new property and a new attribute to each post that will contain the full article URL.</p><p>Before we begin we need to install RainLab’s Blog plug-in for OctoberCMS. So, head over to the plug-in’s <a href="https://octobercms.com/plugin/rainlab-blog">page</a> and check out the installation instructions.</p><p>After installing the plug-in we will need to create a new plug-in of our own so that our work is not erased when we update the Blog plug-in.</p><p>So, in the console write the following command:</p><pre>$ php artisan create:plugin KoderHut.BlogExtension</pre><p>You can substitute <em>KoderHut</em> with your own name or your company’s name.</p><p>At this point we have created our first OctoberCMS plug-in. The command only creates a skeleton folder structure, adding a class called Plugin and a YAML file called version.yaml which will store the versions of our plug-in. You can find more info on this latter file in the docs.</p><p>Inside the Plugin class you will find a method called pluginDetails(). This method is used by OctoberCMS to display the info about the plug-in to the end-user.
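</p><p>As a rough sketch (all the detail values below are placeholders to replace with your own), the method simply returns an array of plug-in metadata:</p><pre>public function pluginDetails()<br>{<br>    // shown in the backend plug-in listing<br>    return [<br>        &#39;name&#39;        =&gt; &#39;BlogExtension&#39;,<br>        &#39;description&#39; =&gt; &#39;Extends the RainLab Blog Post component&#39;,<br>        &#39;author&#39;      =&gt; &#39;KoderHut&#39;,<br>        &#39;icon&#39;        =&gt; &#39;icon-leaf&#39;<br>    ];<br>}</pre><p>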
Check it out and update the info with your own details.</p><p>Next, let’s tell OctoberCMS that our plug-in requires RainLab’s Blog plug-in in order to function.</p><p>To do this, just add the following property to the Plugin class:</p><pre>/**<br> * Plugin dependencies<br> *<br> * @var array<br> */<br>public $require = [&#39;RainLab.Blog&#39;];</pre><p>The $require class property is read by OctoberCMS to check for dependencies among plug-ins.</p><p>Now that we have added our dependency, let’s proceed and override RainLab’s Post component. For this we need to create a new component in our module. Do this by writing the following in your command line:</p><pre>$ php artisan create:component KoderHut.BlogExtension Post</pre><p>I warned you that OctoberCMS will make your developer life easy, no?</p><p>Now, you should find that a new folder, named components, was created in your plug-in folder. It contains a class called Post and a folder with the same name containing a Twig file called default.htm. The Twig file is used to render the component’s HTML, but in our case we will not need it, so you can delete it.</p><p>By default, your Post class will extend the ComponentBase class. We will need to change that to extend RainLab’s Post class instead.</p><p>First, update the ‘use’ statements with the following:</p><pre>use Cms\Classes\ComponentBase,<br>    Cms\Classes\Page;</pre><pre>use RainLab\Blog\Components\Post as RainLabPost,<br>    RainLab\Blog\Models\Post as BlogPost;</pre><p>Next, update the class definition to this:</p><pre>/**<br> * Class Post<br> *<br> * @package KoderHut\BlogExtension\Components<br> */<br>class Post<br>    extends RainLabPost<br>{...}</pre><p>Now, let’s update the property definitions and add a new property to set the page that will be used to view the post.
This page also contains the URL information we will need in order to properly generate the link back to the post.</p><p>So, add the following code to the Post class, overwriting any existing methods:</p><pre>/**<br> * Override of original method<br> * - add new setting for the post page id<br> *<br> * @return array<br> */<br>public function defineProperties()<br>{<br>    $parentProps = parent::defineProperties();</pre><pre>    $properties = array_merge(<br>        $parentProps,<br>        [<br>            &#39;postPage&#39; =&gt; [<br>                &#39;title&#39;       =&gt; &#39;rainlab.blog::lang.settings.posts_post&#39;,<br>                &#39;description&#39; =&gt; &#39;rainlab.blog::lang.settings.posts_post_description&#39;,<br>                &#39;type&#39;        =&gt; &#39;dropdown&#39;,<br>                &#39;default&#39;     =&gt; &#39;blog/post&#39;,<br>                &#39;group&#39;       =&gt; &#39;Links&#39;,<br>            ],<br>        ]<br>    );</pre><pre>    return is_array($properties) ?
$properties : $parentProps;<br>}</pre><pre>/**<br> * Retrieve the options for the postPage property<br> *<br> * @return array<br> */<br>public function getPostPageOptions()<br>{<br>    return Page::sortBy(&#39;baseFileName&#39;)-&gt;lists(&#39;baseFileName&#39;, &#39;baseFileName&#39;);<br>}</pre><p>The <strong>defineProperties</strong>() method adds the new property to the component, while the getPostPageOptions() method provides the list of options for the dropdown.</p><p>Now, we add the URL back to the post by overriding the loadPost() method with the following code:</p><pre>/**<br> * Reference to the page name for linking to posts.<br> * @var string<br> */<br>public $postPage;</pre><pre>/**<br> * Override of original method<br> * - add the post URL to the post entity<br> *<br> * @return mixed<br> */<br>protected function loadPost()<br>{<br>    $post     = parent::loadPost();<br>    $postPage = $this-&gt;property(&#39;postPage&#39;);</pre><pre>    if ($post instanceof BlogPost) {<br>        $post-&gt;setUrl($postPage, $this-&gt;controller);<br>    }</pre><pre>    return $post;<br>}</pre><p>The final step to use our new component is either to drag and drop it from the OctoberCMS components tab onto the page or, if your page already contains the Post component, to change the component definition code from this:</p><pre>[blogPost]<br>slug = &quot;{{ :slug }}&quot;<br>postPage = &quot;blog/post&quot;</pre><p>to this:</p><pre>[KoderHut\BlogExtension\Components\Post blogPost]<br>slug = &quot;{{ :slug }}&quot;<br>postPage = &quot;blog/post&quot;</pre><h3>Final thoughts</h3><p>Although OctoberCMS is in beta at the time of writing this article, I can already see the power and flexibility it provides to non-technical end-users as well as developers. For end-users, the plug-in system is so well designed and simple that with just a few clicks one can add all types of functionality.
The number of plug-ins, both paid and free, is rising fast, and I believe that pretty soon there will be no need to go back to WP.</p><p>Also, with its code well organized, following industry standards and taking full advantage of the Laravel 5 framework that powers it, as well as the design patterns and good practices established by the community, it makes a developer’s life really easy.</p><p>If you missed any step or just want to quickly check out the source code for the plug-in, you can find it on my GitHub account or by visiting the link below.</p><p>I hope you enjoyed the article. Any feedback is appreciated and sharing is welcomed.</p><p>BlogExtension — GitHub source: <a href="https://github.com/rendler-denis/blogextension">https://github.com/rendler-denis/blogextension</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1c981910ed30" width="1" height="1" alt=""><hr><p><a href="https://webthoughts.koderhut.eu/extending-an-octobercms-component-1c981910ed30">Extending an OctoberCMS component</a> was originally published in <a href="https://webthoughts.koderhut.eu">webthoughts.koderhut.eu</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>