The post Page Specific Dynamic Angular Components using Child Routes appeared first on Oleksandr Molochko.
When I first started developing an Angular application, everything seemed to go well. Compared to React, a lot of things were implemented out of the box. That lasted until I needed to make some components of my application dynamic.
Almost every admin panel SPA looks the same. We have a sidebar for navigation, a topbar and the main content, which represents the work area of the application. This layout is UX-friendly; however, sometimes we need to change not only the main layout but other parts, like the topbar or the sidebar.
In React this can be implemented easily with a few lines via <Router> and <Route> (in Angular too, as we will see later). All the suggested solutions that I googled were too complex and required some tricks. They were not hard to understand, but they involved steps that looked overcomplicated.
Possible solutions:
We need some code to play with. There is a great admin panel, https://github.com/akveo/ngx-admin, with a nice layout and design. I have forked this repo for the tutorial. Clone my repo using the following command.
git clone https://github.com/CROSP/dynamic-angular-components && git checkout d5ff97c87d3d03363cd2f34dc7b627dd5388f335

I won't cover the structure of this project, since that is out of scope for this tutorial. We will go step by step, and I hope everything will be clear enough. Navigate to the cloned directory and execute the following command to install dependencies.
npm install

To run the development server, execute the next command.
npm start

After the application is compiled, you should be able to access it at http://localhost:4200/
The main idea is to add some specific components/elements to the topbar and the sidebar for specific pages.
Let’s take two pages for this tutorial.
Furthermore, we will change this static title to a dynamic one using route data.
Let's start with the easier task: we will make the title change according to the currently selected page.
To pass extra data to routes, define a property named data, like this.
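As a sketch of what that property looks like, stripped of Angular imports (the titles here are illustrative stand-ins for the tutorial's pages):

```typescript
// Plain-TypeScript sketch (no Angular imports): RouteDef stands in for
// Angular's Route type. The `data` property is just an arbitrary object
// attached to a route definition, exposed at runtime via ActivatedRoute.
interface RouteDef {
  path: string;
  data?: { [key: string]: string };
  children?: RouteDef[];
}

// Each route carries a `title` entry that the header component can read.
const routes: RouteDef[] = [
  { path: 'smart-table', data: { title: 'Tables' } },
  { path: 'dashboard', data: { title: 'IoT Dashboard' } },
];
```

Nothing about `data` is interpreted by the router itself; it is simply carried along with the route and handed to whoever subscribes to it.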
Next open the Header Component file (header.component.ts).
Inject ActivatedRoute and Router as constructor dependencies as follows:
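As a sketch of that injection shape (plain TypeScript, with stand-in types instead of Angular's real Router and ActivatedRoute), including the same child-traversal idea used by the component's getLatestChild helper:

```typescript
// RouterLike and ActivatedRouteLike are minimal stand-ins for Angular's
// Router and ActivatedRoute, showing only the constructor-injection shape.
interface ActivatedRouteLike {
  firstChild: ActivatedRouteLike | null;
}
interface RouterLike {
  events: unknown[];
}

class HeaderComponentSketch {
  pageTitle = 'No title';

  // In the real component, Angular's injector supplies both dependencies.
  constructor(
    public router: RouterLike,
    public activeRoute: ActivatedRouteLike,
  ) {}

  // Same traversal as in the real component: walk down firstChild links
  // until the deepest activated child route is reached.
  getLatestChild(route: ActivatedRouteLike): ActivatedRouteLike {
    while (route.firstChild) {
      route = route.firstChild;
    }
    return route;
  }
}
```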
Now we need to subscribe to route events to get extra data.
private setTitleFromRouteData(routeData) {
  if (routeData && routeData['title']) {
    this.pageTitle = routeData['title'];
  } else {
    this.pageTitle = 'No title';
  }
}

private getLatestChild(route) {
  while (route.firstChild) {
    route = route.firstChild;
  }
  return route;
}

private subscribeToRouteChangeEvents() {
  // Set initial title
  const latestRoute = this.getLatestChild(this.activeRoute);
  if (latestRoute) {
    this.setTitleFromRouteData(latestRoute.data.getValue());
  }
  this.router.events.pipe(
    filter(event => event instanceof NavigationEnd),
    map(() => this.activeRoute),
    map((route) => this.getLatestChild(route)),
    filter((route) => route.outlet === 'primary'),
    mergeMap((route) => route.data),
  ).subscribe((event) => {
    this.setTitleFromRouteData(event);
  });
}

Of course you can adjust this subscription to your needs, but it should be clear what is going on here. Call this method inside the ngOnInit hook. Finally, add the pageTitle field and use it in the template.
As a result, you should see the title changing while navigating between pages of the application.
This was just a bonus, let’s move to the main part.
First of all, we need to define a new router outlet in places where we need our dynamic components.
Open the header.component.html file and add new outlet.
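For illustration, the extra outlet might look like this in the header template (surrounding markup omitted); the name header-top matches the outlet targeted by the tables routing configuration:

```html
<!-- Secondary, named outlet: routes declared with outlet: 'header-top'
     render their component here instead of in the primary outlet. -->
<router-outlet name="header-top"></router-outlet>
```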
Also add the RouterModule import to the ThemeModule. This step is required for the router-outlet tag to be recognized.
const BASE_MODULES = [CommonModule, RouterModule, FormsModule, ReactiveFormsModule];

Secondly, add another outlet to the Sample Layout Component (sample.layout.ts).
… …

Finally, in order to add page-specific content to the created outlets, go to the Tables Routing module (tables-routing.module.ts) and change it to look like this.
const routes: Routes = [{
  path: '',
  children: [{
    path: 'smart-table',
    children: [
      {
        path: '',
        data: { title: 'Tables' },
        component: SmartTableComponent,
      },
      {
        outlet: 'header-top',
        path: '',
        component: TablesHeaderComponent,
      }],
  }],
}];

As agreed, we need to add two buttons to the header on the Tables page. Define a simple dummy component.
@Component({
  selector: 'ngx-table-header-component',
  template: ``,
})
export class TablesHeaderComponent {
}
After that, wait for compilation and navigate to the route http://localhost:4200/#/pages/tables/smart-table
In case everything is done right you should get the following output.
As you can see, we have just added the desired behavior without refs and the other suggested techniques.
To complete our task, let's turn our attention to the IoT Dashboard page. The routing configuration for that page is located in the root pages routing module (pages-routing.module.ts).
As you may have already guessed, it should be defined in the following way.
Create components, declare them in module and see the result.
For the header part I will use the following layout:
The sidebar component is implemented in this fashion:
@Component({
  selector: 'ngx-dashboard-sidebar-component',
  template: `…`,
})

Declare the components in the module, recompile the application and you should see a similar result.
Everything works pretty well.
In this tutorial I've described a declarative way of changing components dynamically based on the current route, without any references. You can use this method for other cases in your application. I hope this article will be useful to someone.
You can find full project source code at this repository:
The post Understanding Dagger 2 Scopes Under The Hood appeared first on Oleksandr Molochko.
Dagger 2 is a compile-time DI framework that is becoming increasingly popular in Android development. Using Dagger 2 can make an application's structure more scalable and maintainable. However, in this post I am not going to tell you about all the benefits of using Dagger 2. This article is intended for developers who already have some experience with this framework and are eager to learn it a little more deeply. One feature of Dagger 2 is the concept of a Scope. Scopes are not easy to understand at first and require some effort to get used to before you can apply them correctly in an application.
If you are reading this article, I assume that you have already got your hands dirty playing with Dagger 2 and are familiar with the Component, Provider and Inject annotations and their purpose.
Before diving into the Scope mechanism, you should have a quite decent understanding of when and why the dependency injection mechanism is applied. Furthermore, I won’t describe some core theoretical concepts like a dependency graph, components and providers.
All code from the article can be found on GitHub.
What is a scope?
A scope is a memory boundary that contains injectable dependency objects, with a component as an entry point; the lifetime of this memory region (the dependency objects) is defined by the number of references to the scoped component.
The definition above may be rough and not totally correct in some technical aspects, but I want to stress one important thing: all objects in a scope live as long as the Dagger component instance lives. All in all, there is nothing special about components or scopes; the usual Java/Android garbage collection and reference counting mechanisms work here in the same way. The same instance of a scoped dependency is injected when the same component is used.
Don't misunderstand me: if your component is destroyed but a dependency instance is still being kept by some object, of course it won't be destroyed, but the next time you inject dependencies another instance will be provided.
Scopes are basically used during compile time for generating injectors and providers. We will see in a while how generated classes are used in runtime to maintain scoped dependencies.
Let's create a really simple project for understanding the matter. I will take the liberty of violating all design best practices and principles for the sake of simplicity (the best excuse for writing terrible code). I have used some boilerplate classes like BaseActivity and BaseFragment; an explanation of these classes is out of scope of this article, but you can find the full code on GitHub.
Firstly, we need to define some scopes to have something to investigate. I have defined quite common scopes that are usually used in Android Development.
As you can see from the picture above, all scopes are tied to Android SDK classes and have corresponding names. Why do we usually have such scopes in Android applications? Because we as developers are driven by frameworks, in this case the Android SDK, and we have little control over the lifecycle of core system components like activities and fragments.
The app itself consists of two activities and four fragments. Each activity is responsible for hosting two fragments. You can switch between fragments inside a single activity or switch to another activity.
The DI component diagram represents general project structure.
Please note the color of each component; it is assigned according to the scope of the component. As we have multiple components and they form a hierarchy, we need to define relationships between components. I have used the @Subcomponent relationship between child and parent components. Subcomponents have access to the entire object graph of their parent components, including transitive dependencies. We will see later what happens under the hood with parent and child components.
Names are chosen just to make it clear which Android class a component is related to. I would never use such names in a real application.
First of all, we need to define custom scopes. From the code perspective, a scope is nothing more than an annotation. We will have multiple Scope annotations:
A custom scope is defined as follows:
@Scope
@Retention(RUNTIME)
public @interface PerApplication {
}

In order to create a Subcomponent, we need to access a parent component, hence let's define an interface for that purpose.
public interface ProvidesComponent…

It can be used as follows. Casting is not the best idea; however, it is almost impossible to live without this feature in the Android world.
private void initDependencyComponents() {
    ApplicationComponent applicationComponent = ((ProvidesComponent…

First of all, we need to understand how unscoped dependencies behave and how they are implemented in generated code.
Let’s create the Application Component and related modules.
@PerApplication
@Component(modules = {ApplicationModule.class})
public interface ApplicationComponent {
    // Injection methods
    void inject(Dagger2ScopeInternalsApplication mainApplication);
}

Also create the Application module with some dependencies.
@Module
public class ApplicationUnscopedModule {

    Application mApplication;

    public ApplicationUnscopedModule(Application application) {
        mApplication = application;
    }

    @Provides
    Application provideApplication() {
        return mApplication;
    }

    @Provides
    @Named(GLOBAL_APP_CONTEXT)
    public Context provideApplicationContext() {
        return mApplication.getApplicationContext();
    }

    @Provides
    public CarDataRepositoryContract provideCarDataRepository() {
        return new CarDataRepository();
    }
}

I have defined a simple interface, CarDataRepositoryContract, without any methods, and implemented it. Next we need to set up dependencies in the Dagger2ScopeInternalsApplication, BaseActivity and MainActivity classes. Our main application class should look something like this for now.
public class Dagger2ScopeInternalsApplication extends Application implements ProvidesComponent…

As you can see from the code above, we are building the Application Component; we need to store it as a class field to be able to access it from activities.
In activities, to get the Application Component we cast the Application class to the ProvidesComponent interface.
Finally, in the MainActivity class we also inject dependencies (only one dependency) using the component from BaseActivity, as follows.
public class MainActivity extends BaseSingleFragmentActivity implements MainFirstFragment.SecondFragmentRouter {

    @Inject
    CarDataRepositoryContract mCarRepoOther;

    // … Other variables

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        injectDependencies();
        // … Further initialization
    }

    private void injectDependencies() {
        mAppComponent.inject(this);
    }
}

You may wonder why we need CarDataRepositoryContract in both BaseActivity and MainActivity. In the real world this wouldn't make any sense, but for our tutorial it will show how unscoped dependencies are provided. Let's figure this out.
Build and launch the application, but set breakpoints on the lines in all classes that have CarDataRepositoryContract as a dependency (e.g. in the onCreate method).
CarDataRepository@3611
CarDataRepository@3778
CarDataRepository@3779

As you can see from the results above, we got a new instance every time we requested dependencies. Even if you call the inject method multiple times in the same class, you will get a new instance anyway.
Naturally, if we injected the Application dependency in activities, it would be set to the same instance, since this object itself is a singleton.
You can get the code for that example from this commit.
To conclude:
It’s time to look under the hood. Open the implementation of the ApplicationComponent interface.
You can find all generated files in the generatedJava directory or by navigating to the implementations of the interface. Another way is just to find the class with the Dagger prefix for your component; for instance, ApplicationComponent will have a generated DaggerApplicationComponent implementation.
Here is the full code of the ApplicationComponent implementation, if you followed the described steps you should have a similar one.
public final class DaggerApplicationComponent implements ApplicationComponent {
    private Provider…

Let's start our journey from Components. Components are the major building blocks of the Dagger 2 library. A Component implementation acts as a container (i.e. orchestrator) for all the other players in Dagger 2. When you build a component, you have to provide all dependencies that cannot be resolved automatically. Since we are somewhat tied to the Android SDK, the Context, Application and Service classes are usually provided to modules to construct the corresponding components.
Let's discuss some snippets of the generated implementation. As you can see, the creation of the component itself happens inside the Builder nested class. As a result, you won't be able to create the Component directly or to create it without providing the required modules.
Components do not inject dependencies by themselves; they delegate this task to the MembersInjector classes.
@Override
public void inject(Dagger2ScopeInternalsApplication mainApplication) {
    dagger2ScopeInternalsApplicationMembersInjector.injectMembers(mainApplication);
}

The creation of MembersInjectors happens after all Providers are created in the initialize() method.
MembersInjector classes are exactly what is responsible for injecting dependencies into dependent objects. Roughly speaking, any class that requires dependencies and has at least one @Inject annotation has its corresponding injector.
Let’s have a look at the MainActivity_MembersInjector class.
public final class MainActivity_MembersInjector implements MembersInjector…

For a MembersInjector to be created, all dependencies should be provided via the Provider classes passed to the create method.
Please pay attention to the highlighted lines (20, 21, 21). I have intentionally not mentioned this while describing the application structure. When a parent class has dependencies, they should be satisfied before injecting dependencies into the child class. And that totally makes sense; otherwise you could get NullPointerExceptions thrown. Dagger 2 is smart enough to detect this kind of relationship while building the dependency graph and generating classes. You can even remove the void inject(BaseActivity baseActivity) method, and the BaseActivity_MembersInjector class will still be generated.
When actual injection happens a MembersInjector instance delegates this request to a corresponding Provider.
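As a rough illustration (the names here are mine, not Dagger's generated code), the injector/provider interplay for an unscoped binding boils down to the following: every injectMembers call asks the provider, and an unscoped provider builds a fresh instance each time, which is exactly the behavior observed with the repeated CarDataRepository instances above.

```java
import java.util.function.Supplier;

// Stand-in for the repository dependency from the tutorial.
class CarDataRepository {}

// Stand-in for a class with an @Inject-annotated field.
class ActivityStandIn {
    CarDataRepository repo;
}

// Sketch of a generated MembersInjector: it only copies values from
// providers into the target's injectable fields.
class ActivityMembersInjectorSketch {
    private final Supplier<CarDataRepository> repoProvider;

    ActivityMembersInjectorSketch(Supplier<CarDataRepository> repoProvider) {
        this.repoProvider = repoProvider;
    }

    void injectMembers(ActivityStandIn target) {
        // Unscoped provider: get() constructs a new instance per call.
        target.repo = repoProvider.get();
    }
}
```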
Now we have reached the most interesting part from this article's perspective: the Provider. A Provider is the guy that uses the created modules to provide dependencies. Let's have a look at the implementation of ApplicationUnscopedModule_ProvideCarDataRepositoryFactory.
public final class ApplicationUnscopedModule_ProvideCarDataRepositoryFactory implements Factory…

At line 14 you can see a direct call to the ApplicationUnscopedModule module. That's it: we finally reached the source of the dependency.
I hope you now have a better understanding of how injection happens and flows. It's time to understand how scopes affect all this.
We need to create the other components as they were defined in the diagram at the beginning of the article. For simplicity, I will have a single module per component.
Because child and parent components are connected via the @Subcomponent mechanism, to create a child component we need access to the parent component. The parent component is responsible for creating the child component, but all the required module dependencies are provided by the child component.
Let’s define subcomponents according to the diagram and modify existing ApplicationComponent. I think it will be better to start from the bottom and move upwards.
This component corresponds to the MainFirstFragment class, and only one dependency needs to be injected into it: a router that will handle navigation between views.
public class MainFirstFragment extends BaseFragment {
    // Callbacks
    @Inject
    SecondFragmentRouter mRouter;

    // … other stuff

    public interface SecondFragmentRouter {
        void onSwitchToSecondFragment();
    }
}

The containing activity will act as the router; therefore, we provide this dependency in the following way.
@Module
public class MainFirstFragmentModule {
    @Provides
    @PerFragment
    public MainFirstFragment.SecondFragmentRouter provideSecondFragmentRouter(Activity containerActivity) {
        return (MainActivity) containerActivity;
    }
}

Since we are using @Subcomponents, we need to provide creator methods in the parent component to create children ("plus" methods). The Activity acts as a router and is injected into both child fragment components, while exposing to each component only the methods it requires (Interface Segregation).
public class MainActivity extends BaseSingleFragmentActivity implements ProvidesComponent…

The ActivityComponent component provides a more generic dependency graph with commonly used dependencies (FragmentManagers, LayoutInflaters) or Context-specific dependencies. As you may already know, you can get different types of the Context class at runtime, and using an inapplicable one may cause various problems; because of this, you must not inject the global (Application) Context everywhere. This is just one case where we may need this kind of component.
In a like manner, we declare creator methods for the other child components.
Because we are using Subcomponents, in order to create a child component we first need to get the parent one. Here is the chain of component creation, for better understanding.
Dagger2ScopeInternalsApplication

mApplicationComponent = DaggerApplicationComponent.builder()
        .applicationUnscopedModule(new ApplicationUnscopedModule(this))
        .build();

BaseActivity

ApplicationComponent appComponent = ((ProvidesComponent…

MainActivity

mMainScreenComponent = mActivityComponent.plusMainScreenComponent();
mMainScreenComponent.inject(this);

MainFirstFragment

this.getComponent(MainScreenComponent.class)
        .plusMainFirstViewComponent()
        .inject(this);

Now compile the project and open DaggerApplicationComponent again. You can check out this commit to get the code written so far. Please look through this file attentively.
public final class DaggerApplicationComponent implements ApplicationComponent {
    private Provider…

That is how @Subcomponents work. Since subcomponents need access to the dependency graph of the parent component, there is a feature already available out of the box in the Java world: nested classes. As a result, a subcomponent can just reference a Provider implementation from the parent component. Please note that whenever we call a plus method, a new instance of the subcomponent is returned.
You may wonder why I am telling you about components and subcomponents and nothing about scopes. That is because there is no such term as Scope in the generated code. Try to find any occurrences of "scope", "PerActivity" or "PerApplication" in the generated code. No matches? Then try to find the differences between the Providers created in the previous example of unscoped dependencies and this one.
private void initialize() {
    this.provideLayoutInflaterProvider =
            DoubleCheck.provider(ActivityModule_ProvideLayoutInflaterFactory.create(activityModule));
    this.provideSupportFragmentManagerProvider =
            DoubleCheck.provider(ActivityModule_ProvideSupportFragmentManagerFactory.create(activityModule));
    // …
}

There is one small but important difference that could easily be missed at first sight: DoubleCheck.provider.
Finally, we have found how all these scope annotations are represented in the generated code. Let's open the DoubleCheck class to see what is going on there.
public final class DoubleCheck…

If you have any previous experience with Java, you might be familiar with the technique used in the get() method. It is called a thread-safe singleton (double-checked locking). In fact, that is how scopes are maintained by Dagger 2.
The DoubleCheck class itself is an implementation of the Proxy pattern. It wraps a simple Provider implementation with caching and threading logic, so after the first call to the get method you will be given the cached instance. There is also lazy initialization logic; however, that is out of scope for this tutorial.
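To make the mechanics concrete, here is a minimal sketch of a DoubleCheck-style provider (my own simplified version, not Dagger's actual source): double-checked locking caches the first value produced by the wrapped provider, which is exactly the behavior a scope annotation buys you.

```java
import java.util.function.Supplier;

// Simplified DoubleCheck-style wrapper: the first call to get() builds
// the instance via the delegate; every later call returns the cache.
final class DoubleCheckSketch<T> implements Supplier<T> {
    private static final Object UNINITIALIZED = new Object();

    private volatile Object instance = UNINITIALIZED;
    private volatile Supplier<T> delegate;

    DoubleCheckSketch(Supplier<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    @SuppressWarnings("unchecked")
    public T get() {
        Object result = instance;              // first (unsynchronized) check
        if (result == UNINITIALIZED) {
            synchronized (this) {
                result = instance;             // second check under the lock
                if (result == UNINITIALIZED) {
                    result = delegate.get();   // build once
                    instance = result;
                    delegate = null;           // the factory is no longer needed
                }
            }
        }
        return (T) result;
    }
}
```

Two wrappers around the same factory behave like two component instances: each caches its own object, which is why recreating a component recreates its whole scope.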
To summarize:
To consolidate this information, let's break Dagger 2 scopes through improper use of the library.
If you were attentive, you may have noticed that I used ApplicationUnscopedModule; consequently, we lack the DoubleCheck wrapper around its providers. Now let's see what the possible ways of violating Dagger 2 rules are.
Consider the following case, where you call dependency injection several times and haven't checked whether the component was already created.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Inject views
    injectDependencies();
    ButterKnife.bind(this);
    initUI();
    navigateToInitialScreen();
    injectDependencies();
}

private void injectDependencies() {
    mMainScreenComponent = mActivityComponent.plusMainScreenComponent();
    mMainScreenComponent.inject(this);
}

In that case, all dependencies in the scope of the Main Screen Component will be recreated. For instance, suppose we add a dependency to the MainScreenModule as follows.
@Provides
@PerScreen
public MainScreenDependencyContract provideMainScreenDependency() {
    return new MainScreenDependency();
}

If we call the injectDependencies method twice, two instances of MainScreenDependency will be created and only the last one will be kept.
A less obvious situation may happen when you place component creation into an inappropriate lifecycle method, without checking for the existence of the component.
@Override
protected void onResume() {
    super.onResume();
    injectDependencies();
}

An Activity can be hidden or shown any number of times during its lifecycle, for instance when it is overlapped by another Activity: a camera, a file chooser, etc.
Another possible problem that you may encounter is not related to DI or Dagger 2 itself, but it is important to keep this in mind, especially when working with Android SDK with a large number of callbacks and listeners. Consider the following interface.
public interface GlobalEventNotifierContract…

For instance, we may be interested in some global application events, like push messages. Implement this interface in Dagger2ScopeInternalsApplication.
public class Dagger2ScopeInternalsApplication extends Application implements ProvidesComponent…

And in activities we add ourselves as a listener for events.
public class MainActivity extends BaseSingleFragmentActivity implements ProvidesComponent…

Additionally, add the same code to the SecondaryActivity class. Run the application and try to navigate back and forth between activities. What do you think is happening right now? Open the Profiler and explore memory.
Memory is allocated but not released. Let's examine what listeners we have registered.
Something is probably wrong here :). All in all, be very careful when passing callbacks and listeners, and don't forget to unsubscribe.
Do not think that the @Singleton annotation magically makes your dependency a real singleton object. Due to its name, this annotation may cause confusion; Dagger just uses it for several checks. The annotation is defined in the javax.inject package (JSR 330), not in the Dagger library, so you may have already met it elsewhere in Java.
Try to change the scope of SecondarySecondViewComponent and all related modules to @Singleton, like this.
@Provides
@Singleton
public SecondarySecondFragment.FirstFragmentRouter provideFirstFragmentRouter(Activity containerActivity) {
    return (SecondaryActivity) containerActivity;
}

@Provides
@Singleton
CarPartsDataRepositoryContract provideCarPartsDataRepository() {
    return new CarPartsDataRepository();
}

Compile the application, open DaggerApplicationComponent and try to find the differences. Eventually, you won't find any real singleton machinery except DoubleCheck; Dagger 2 treats the @Singleton annotation the same way as any other scope annotation.
In conclusion, let’s look at a comment from Dagger 2 source code.
In order to get the proper behavior associated with a scope annotation, it is the caller's responsibility to instantiate new component instances when appropriate. A {@link Singleton} component, for instance, should only be instantiated once per application, while a {@code RequestScoped} component should be instantiated once per request. Because components are self-contained implementations, exiting a scope is as simple as dropping all references to the component instance.
Generally speaking, I do not suggest using the @Singleton annotation if you have just started working with Dagger 2; it will be much better if you create your own scope and learn more about scopes, instead of blindly using @Singleton everywhere.
I hope that after reading this article you have a better understanding of scopes and of what goes on behind the scenes of Dagger 2 dependency injection.
Dagger 2 is a great tool that will make your application much more flexible and maintainable if applied correctly. Don't hesitate to look into the internals of the implementation; it will help you use the tool in the right way.
If you have any questions or suggestions please leave them below in comments.
The post Understanding and using Xdebug with PHPStorm and Magento remotely appeared first on Oleksandr Molochko.
Xdebug is a great way to eliminate var_dump() and print_r() statements and make your PHP debugging experience better and more productive. When you have a simple application, there is no real need to spend time configuring Xdebug; you can get along with dumping variables. However, in frameworks like Magento it is really hard to debug using var_dump statements. This is caused by the huge code base of the Magento platform and, as a consequence, the use of caching everywhere it is possible (and where it is not). Furthermore, tracing the path of a function call can be a really tedious task. Using Xdebug together with PHPStorm makes debugging much easier.
There are a lot of articles out there; however, I remember trying several times to configure Xdebug, and all attempts failed because of a lack of understanding of how all these tools work under the hood.
The main intention of this article is to explain how Xdebug works and what problems I've encountered while configuring it for my projects. Personally, I always try to understand how something works under the hood, and I think that is worth it in any case.
I won't cover all the benefits and features of Xdebug; there is enough information already available. One important reason to start using Xdebug is debugging in production. When using var_dump() and print_r() statements, you are not only showing visitors a ton of scary information, but you also risk having your sensitive data exposed, for instance database credentials.
First of all, we need to understand how the Xdebug debugger works. Xdebug is a PHP extension that is loaded dynamically as a shared library (.so); therefore, before using it you need to compile it. We will see later how to compile and install this extension.
Xdebug differs from other debugging tools you may be familiar with. Xdebug works over a protocol that requires your local machine to listen for incoming connections from the server where the application you are debugging is located. You may have used debugging tools that simply connect to a remote server or to a process of your application, but Xdebug works the other way round; it is similar to the concept of a "callback" in programming languages. Let's explore this communication step by step.
There is a nice illustration that will help to understand the flow better.
To sum up and get better understanding here is one more useful illustration for the flow.
If you are as curious as I am about these steps and how everything works under the hood, let's explore them in depth, but without concrete configuration for now; I will show a real configuration a bit later.
As I have already mentioned, Xdebug works in such a way that the party that wants to debug the application must listen for incoming connections.
Let's follow the naming convention and call the guy that listens for back-connections from a remote server the Xdebug client. Generally, this could be called a server, since it is listening for connections from a remote machine.
Consequently, you need to start the Xdebug client in the code editor on your local machine. I prefer the PHPStorm IDE, but you are free to use any other editor, like Sublime Text 3, Brackets, etc. Just first ensure that your editor has an Xdebug plugin available.
Later I will explain how to configure PHPStorm, but the final result should be the same: you have to have the Xdebug client listening on a port, usually 9000. You can verify this, for instance, using the following command on Linux.
netstat -tulpn | grep "9000"

Or, in case you are a Mac user, the following command can be useful.
sudo lsof -i -n -P | grep "TCP.*:9000"

In my case I got the following output.
Alexanders-MacBook-Pro:~ crosp$ sudo lsof -i -n -P | grep "TCP.*:9000"
Password:
phpstorm 45069 crosp 23u IPv4 0x9bc0aed7d39b0659 0t0 TCP *:9000 (LISTEN)

You may have a question regarding this matter: how do you listen for incoming requests if you have a dynamic IP address or your network is behind a firewall? The answer is to use SSH port forwarding; we will see later how to forward ports.
To start debugging, considering that the server needs to connect to our machine, we need to inform the server about our intent. This can be done in several ways:
Here is how it looks in the request headers. You can see that the XDEBUG_SESSION cookie is sent with the request.
To disable the debugging session, you have to delete the cookie.
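For instance, the session can be toggled from the command line as well (the URL here is a placeholder; PHPSTORM matches the IDE key used later in the configuration):

```shell
# Start a debugging session: Xdebug sets the XDEBUG_SESSION cookie
# in response to the XDEBUG_SESSION_START parameter.
curl -v "http://example.com/index.php?XDEBUG_SESSION_START=PHPSTORM"

# Stop the session: this removes the XDEBUG_SESSION cookie.
curl -v "http://example.com/index.php?XDEBUG_SESSION_STOP=1"
```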
As the title states, in this article I will show how to debug a Magento project; however, you are free to use any kind of PHP project. Magento is a CMS that really requires a debugger because of its enormous codebase and non-trivial request handling.
I won’t show how to install the PHP interpreter and assume that you already have PHP up and running.
In my case I have the following configuration:
Server OS – Linux Debian 9
HTTP Server – Nginx 1.14.0
PHP – PHP 7.0.27 (FPM)

As it was already mentioned, XDebug is a native extension for PHP, provided as a shared (.so) library. Hence, you need to compile and install this extension first. The easiest and fastest way I have found is to use PECL.
To install XDebug using PECL type the following command in the terminal.
```
pecl install xdebug
```

After that you will see the compilation process, and when it is done you should see the path of the compiled extension. On my server I got the following output.
```
Build process completed successfully
Installing '/usr/lib/php/20151012/xdebug.so'
install ok: channel://pecl.php.net/xdebug-2.6.1
configuration option "php_ini" is not set to php.ini location
You should add "zend_extension=/usr/lib/php/20151012/xdebug.so" to php.ini
```

Next, you need to enable the XDebug extension by modifying the php.ini configuration. In my case I had to create a config file at /etc/php/7.0/fpm/conf.d/20-xdebug.ini, but the configuration options are, of course, the same.
The XDebug extension has a lot of configurable options; they are described in detail on the official page. For my needs I used the following configuration.
```
zend_extension=/usr/lib/php/20151012/xdebug.so
[Xdebug]
xdebug.remote_host=localhost
xdebug.remote_connect_back=0
xdebug.remote_enable=1
xdebug.remote_port=9900
xdebug.idekey="PHPSTORM"
xdebug.remote_log="/tmp/xdebug.log"
xdebug.remote_handler=dbgp
```

I have to explain some of the options above.
xdebug.remote_host – set to localhost, since we will use port forwarding to our local machine later. You can also set it to a specific domain or an IP address.
xdebug.remote_connect_back – disabled by default. This option allows XDebug to connect back dynamically to the IP address that requested the start of the debugging session. However, it won’t work in most cases, since your machine will usually be behind a firewall, have a dynamic IP address and closed ports. Considering all these problems, the best option is to use port forwarding.
xdebug.remote_port – I have deliberately used a non-default port number, to make things more explicit when we configure port forwarding.
xdebug.idekey – defines the key that should be passed when initializing the XDebug session, as discussed earlier. In my case the value is PHPSTORM.
Other options are self-descriptive, and for additional information you can refer to the official page.
In order to apply the changes, restart the PHP service (if you use the FPM flavor, of course). Now verify that the XDebug extension has been successfully enabled, for instance using the well-known phpinfo() function. You should get similar output.
Now it’s time to configure our IDE, PHPStorm in the context of this article. Before starting, please make sure that you have an exact copy of the server’s files on your local development machine. This is required for the debugger to work correctly.
Open your project, open the IDE settings and search for xdebug. Set the options according to your requirements. Please note that even though we set the port number to 9900 on the server, we use 9000 on the local machine; you will see why in a moment. Also check twice that no local process is already using port 9000. If you have PHP-FPM installed locally, it will use this port, so you will have to pick another one.
Now search for DBGp and apply the settings matching the previous configuration. This configuration is required for multi-user debugging; if you are the only person debugging the application, you may skip this step.
One more thing you can configure is to create a Remote Debug configuration. I have set it up as follows.
You can run this debug configuration to start listening on the configured port; however, personally I haven’t found any difference between it and the regular listening mode.
Finally, we are ready to start debugging, but before that we need to configure SSH port forwarding. I won’t explain this concept in depth; you can find a lot of articles about port forwarding.
However, for better understanding, here is a picture of how it works.
So basically you are redirecting all requests that come to port 9900 on your server to port 9000 on the local machine through an SSH tunnel. To set up the forwarding you can use the following command.
```
ssh -R 9900:localhost:9000 your_ip_hostname_or_config_alias
```

Ensure that there are no error messages; for instance, if the port is already in use on your remote server, the port forwarding fails. In my case I got the following message: Warning: remote port forwarding failed for listen port 9900.
As you can see, the remote port is set to 9900, but the local port that the PHPStorm debugger listens on is 9000. You can freely change either of these ports via the configuration explained above. This type of communication is firewall-friendly and much more secure.
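If you debug regularly, you can persist the tunnel in your SSH client configuration instead of typing the -R option every time. A possible ~/.ssh/config entry (the host alias, hostname and user below are placeholders) might look like this:

```
Host magento-debug
    HostName your.server.example.com
    User your_user
    # Forward remote port 9900 back to the local XDebug client on port 9000
    RemoteForward 9900 localhost:9000
```

With this entry, running ssh magento-debug opens the tunnel for as long as the session stays connected.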
To initiate the XDebug session, as mentioned earlier, you need to set the cookie. For that there is a great extension for the Chrome browser – XDebug Helper. It does this work for you, and it is also capable of triggering tracing and profiling.
There is no magic behind this extension: it just sets the cookie and unsets it when you stop debugging. You can have a look at the source code of the extension.
```javascript
if (status == 1) {
    // Set debugging on
    setCookie("XDEBUG_SESSION", idekey, 365);
    deleteCookie("XDEBUG_PROFILE");
    deleteCookie("XDEBUG_TRACE");
}
```

Now set breakpoints on the lines you are interested in. For example, I will set a breakpoint in the pub/index.php file.
Next, start listening for incoming debug connections. You can do this in several ways: you can just click on this button in PHPStorm,
Or you can use the preconfigured remote debug configuration.
After all these steps, enable the extension in your browser on your application’s web page by clicking the green bug icon.
Finally, refresh your page, and after a moment you should see the familiar debug window appear in PHPStorm. You may also be asked to accept the debug connection.
Now you are able to debug your application step by step without a single var_dump().
In this article I tried to explain how XDebug works; I think this is crucial for using it effectively. Furthermore, it is always worth understanding how things work under the hood.
The configuration described in this article is only one possible way that works well in my case. If your situation is different, you may need another configuration. Anyway, if you have problems configuring XDebug, please feel free to ask below in the comments.
The post Understanding and using Xdebug with PHPStorm and Magento remotely appeared first on Oleksandr Molochko.
The post How to Override Module Templates and Classes in Prestashop 1.7 appeared first on Oleksandr Molochko.
Overriding modules is a really powerful feature of the Prestashop CMS. It makes it possible to change the behavior and appearance of modules without altering the original source code. However, Prestashop 1.7 introduced a lot of changes, and some features are gone forever. Therefore this article is intended to explain the most widely used cases, how the override system works in Prestashop, and which overriding techniques are still available.
One important note: starting from Prestashop 1.7, overriding is considered a bad practice. According to this article, overrides are still available, even though a lot of overriding features are already missing in the 1.7.x versions. For instance, you can no longer override core classes by putting your classes in prestashop-root/override/classes/module/Module.php.
Only classes for translation can be overridden this way. Have a look at the getFileToParseByTypeTranslation method in the AdminTranslationsController file.
Before starting any modification or creation of source files, enable Debug Mode from your Admin Panel: Advanced Parameters -> Performance -> set Debug Mode to true. You may also need to enable recompilation of template files in case you are going to change the layout. Alternatively, you can delete the cache/class_index.php file.
Throughout this article I will be working with the default module Main menu. Of course, you can apply all these techniques to other modules.
When you are trying to find out how to implement something using a CMS, you usually refer to the official documentation. But in my personal experience, it can take much longer to find an explanation of some part of an engine by googling or reading the official documentation. For that reason I prefer grepping the source code. The most valuable benefit of this method is understanding how everything works under the hood, what architecture is used and how a given feature is implemented. After exploring several software products this way, you get a feeling for how something could be implemented and what the pros and cons are, and you can use the acquired knowledge while building your own software products.
In order not to break most of the available modules, Prestashop 1.7 still supports overriding for modules. Since we are interested in overriding module behavior, I encourage you to have a look at the Module class, located at prestashop-root/classes/module/Module.php.
At first, let’s find out how to override a module’s appearance. Find the _isTemplateOverloadedStatic method.
As you can see from the code above, there are several path patterns that can be used to override a module’s template files. In my case I have created a file at the following path: prestashop-root/themes/my-theme/modules/ps_mainmenu/ps_mainmenu.tpl.
To modify the behavior of a module’s class, you have to put your modified class at prestashop-root/override/modules/NAME_OF_MODULE/class_to_override.php, and your class name should follow the pattern {NameOfOriginalModuleClass}Override. For example, for override/modules/ps_mainmenu/ps_mainmenu.php the class name should be Ps_MainMenuOverride. To understand how the CMS processes module class overrides, find the coreLoadModule method.
As you can see, the implementation is quite straightforward. According to these rules, our class might look as follows.
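The original listing for this step did not survive in this post, so here is a minimal sketch based on the naming rules just described. It assumes the stock ps_mainmenu widget module; the overridden method is only an example of what you might change:

```php
<?php
// override/modules/ps_mainmenu/ps_mainmenu.php
// Sketch only: the overriding class follows the {OriginalClass}Override pattern.
class Ps_MainMenuOverride extends Ps_MainMenu
{
    public function renderWidget($hookName, array $configuration)
    {
        // Adjust or extend the original behaviour here, then fall back
        // to the parent implementation.
        return parent::renderWidget($hookName, $configuration);
    }
}
```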
From the code above you may notice that these functions are defined as static. PHP has a feature called Late Static Bindings, but I still don’t think it should be used here. Furthermore, the Module class file has more than 3000 lines of code, and most of its functions are static, making it more of a utility class than an OOP entity. This class is a kind of God object anti-pattern. A possible improvement would be to extract functions like loadModule and loadTemplate into a separate class called ModuleLoader/ModuleManager. Surely there might be a reason for this design decision.
All in all, Prestashop 1.7 has breaking changes for developers who leveraged the overriding feature extensively. Judging by the Prestashop blog posts, it seems that the team wants to improve the architecture of the system. This is definitely commendable, but any major changes should be introduced step by step, leaving backward-compatible options for developers.
If you are still trying to find a way to override the CMS core classes without direct modification of source code, you could probably find a workaround, for instance:
I hope this article is helpful for understanding how the override feature works under the hood. If you have any questions, please feel free to leave comments below.
The post How to Override Module Templates and Classes in Prestashop 1.7 appeared first on Oleksandr Molochko.
The post Efficient, Fast and Scalable WordPress development with Timber (Twig) appeared first on Oleksandr Molochko.
Is there any reason to use Twig with WordPress? Do you like PHP code snippets inside your HTML layout templates? Can you open any large template and find the part you are interested in within a few seconds? If you can, you probably have a small template file.
When I first started using CMSes written in PHP, I was unpleasantly surprised by the way they produce HTML code. I tried one CMS and then another, but found the same situation: HTML and PHP code snippets were tightly coupled and mixed in a chaotic way. If a template file contains fewer than 200 lines of code it may not seem so scary, but in large projects you may end up spending hours trying to find an unclosed tag, bracket or quote. Furthermore, WordPress has introduced a really bad practice: opening a tag in one file and closing it in another.
I have noticed that newbie developers sometimes don’t understand how the HTML/CSS/JS frontend trio relates to “server-side” languages like PHP. Why do I put the words server-side in quotes? Because, to be honest, the only thing that makes a language server-side is the ability to handle requests coming from the client side. At a low level it means that the language has built-in features for opening sockets and handling requests of a specific protocol, in our case HTTP. Node.js is a good example: JavaScript was initially used only inside browsers’ virtual machines, standardized by the ECMAScript specification, and is now implemented on the server side and used quite heavily for different kinds of applications, thanks to Node.js’s great asynchronous I/O and request processing.
Why am I telling you this? To help you grasp an important idea. There are a lot of different languages that can be used for producing HTML code; they all have different syntax and features, and any time you mix your frontend markup with backend code you are boxing yourself in, since you will have to change a lot of code if the backend technology changes. To avoid such problems, keep your template (layout) files decoupled from your server-side technologies. Template engines usually use some sort of DSL that is processed by the template engine itself. In future articles I will show how you can create a web site (powered, for example, by WordPress) from an HTML layout without making a single change to the HTML files.
Twig is a template engine for the PHP programming language that is built to boost your development experience, save a lot of time and spare your nerves. It will bring new advantages to your development process and help eliminate spaghetti code. Twig is becoming ever more popular and widely used: for instance, Twig is the core template engine in Drupal 8, and OpenCart, starting from version 3.0.0, also leverages it. I would even guess that some future WordPress version will use Twig or another template engine.
Let’s have a look at a before-and-after example.
Without using Twig.
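The original listings did not survive in this post, so here is my own illustrative pair (hypothetical markup, not the author’s exact code). Without Twig, a typical WordPress loop mixes PHP and HTML:

```php
<ul>
<?php foreach ( $posts as $post ) : ?>
    <li>
        <a href="<?php echo esc_url( get_permalink( $post ) ); ?>">
            <?php echo esc_html( $post->post_title ); ?>
        </a>
    </li>
<?php endforeach; ?>
</ul>
```

With Twig (via Timber), the same loop becomes:

```twig
<ul>
{% for post in posts %}
    <li><a href="{{ post.link }}">{{ post.title }}</a></li>
{% endfor %}
</ul>
```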
This is not the best example, but even so you can see that Twig provides much more readable and concise syntax than PHP echo statements.
Twig is a really powerful and feature-rich template engine, and you can find a lot of articles explaining its features in detail. This article is intended to show how you could use Twig to make your WordPress development experience a little better and faster.
Twig itself is a template engine for PHP, so you may wonder why you need another plugin to start using it. The answer is: for your convenience. There is a great plugin called Timber. It provides a bunch of useful functions that are closely related to WordPress, for instance for retrieving posts, pages, filters, custom attributes, etc. You will see some of its features in this article, or you can refer to the official documentation for more details. So install the Timber plugin and activate it.
Let’s get started. In this tutorial I will use code from a real project. We will take the existing default theme Twenty Seventeen and rewrite it using Twig, also changing some parts and the structure of the default theme. I won’t cover the whole process in this article, because that would require a series of articles and wouldn’t make much sense.
First, copy the default Twenty Seventeen theme and rename the directory according to your project name; in my case it will be incresive. Then create a new directory called templates. We will put all the Twig template files there. There can be a long discussion about the directory structure used for a project. Generally I prefer the package-by-feature approach; however, in this case there are no components/modules/features with pronounced boundaries. Anyway, we will try to keep our codebase clean and extendable.
Another directory we will have is page-composer. It will contain PHP classes that are responsible for collecting data to be displayed on a page.
To be honest, WordPress is not a framework/CMS written with OOP principles in mind. It has some classes, but mostly they don’t make sense in an OOP context. There is nothing wrong with that; there are reasons for such decisions, for example backward compatibility, speed and ease of understanding. Following “true” OOP principles may introduce more disadvantages than advantages: OOP paradigms and principles usually come with a lot of abstraction, Inversion of Control and boilerplate code. As a result, they dramatically steepen the learning curve and require additional mechanisms, like caching, to achieve the same performance.
A Page Composer is a class (or rather an instance of a class) that collects all the data necessary for rendering a page; after all data is fetched, control is handed over to Timber to render the page. It is an attempt to further decouple details from the domain and make our theme more OOP-driven.
Let’s create an abstract class called Page_Composer_Contract. I am using the Contract suffix; it is just my preference for naming interfaces instead of the traditional I prefix. The file itself is called interface-page-composer.php, following the “WordPress naming convention”.
As you can see from the class above, we have two methods. At first sight you might think that a page composer will render the page by itself via the render method, but as you will see later, it is only used to delegate the flow of control to Timber. The get_template method returns the template name for a page.
The Template Method pattern
Why do we need a separate method instead of passing a Twig template file name directly to Timber? Consider the case when you have a page composer class and just want to swap the template for a new one. Without such a method, you would have to override the whole render method, copying and pasting code from the parent class. With a dedicated method returning the template file name, you only have to override that single method. This pattern is called the Template Method pattern.
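As a sketch of this idea (the class and template names here are hypothetical, not from the tutorial’s codebase), a child composer overrides only the get_template hook, while render in the parent keeps doing the work:

```php
<?php
// Hypothetical illustration of the Template Method pattern described above.
class Special_Offer_Page_Composer extends Front_Page_Composer
{
    // Only the "hook" method is overridden; the parent's render() still
    // collects the data and delegates to Timber with this template name.
    public function get_template()
    {
        return array('special-offer-page.twig');
    }
}
```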
Now let’s define a class that will serve as the base class for all page composers and contain the common logic. Here I will show only one piece of it: adding specific classes to the body tag according to the device used by the visitor. The file name is abs-class-base-page-composer.php; I don’t know whether this naming conforms to the WordPress naming conventions, but let it be.
```php
public function __construct()
{
    $this->context = Timber::get_context();
    add_filter('body_class', array($this, 'detect_device_class'));
}

public function detect_device_class($classes)
{
    $this->device_detector = new Mobile_Detect();
    $body_class_name = '';
    if ($this->device_detector->isMobile()) {
        $classes[] = self::CLASS_MOBILE;
        if ($this->device_detector->is('iOS')) {
            $classes[] = self::CLASS_IOS;
        }
        if ($this->device_detector->is('Android')) {
            $classes[] = self::CLASS_ANDROID;
        }
    }
    $classes[] = $body_class_name;
    $classes[] = 'body-main';
    return $classes;
}

public function collect_body_styles()
{
    // Collect classes for the body tag
    $body_classes = get_body_class();
    $this->context['body_properties'] = '';
    if (in_array(self::CLASS_IOS, $body_classes)) {
        $this->context['body_properties'] .= 'ontouchstart=""';
    }
    $this->context['body_classes'] = join(' ', $body_classes);
}
```

I think the code above is clear. The context property stores all the data that will be passed to a template by Twig (Timber). We will discuss it later in this article.
Finally, everything is ready for creating Twig templates for our theme. There is a starter theme provided by the developers of Timber; I suggest you have a look at it to get an understanding of what you can do with the plugin and Twig itself.
The initial step is to create a base template file that contains the parts common to all pages. Let’s create a file called base.twig and define it as follows.
```twig
{% block common_header %}
    {% include ['base-common-header.twig'] %}
{% endblock %}
{% block body_scripts %}
    {% include ['base-body-scripts.twig'] %}
{% endblock %}
{% block header %}
{% endblock %}
```

We have just defined a skeleton for our pages. I think it is quite clear what is going on here; at least the code is readable. The content of the included files is not really important here, as it will be overridden. Let’s discuss the most important parts.
Now it is time to define a template for the front page. There is no magic behind Timber (Twig): WordPress still works the same way as before, so for the front page there is a file called front-page.php. Create it if you don’t have it, remove all its content and insert the following code.
```php
render(); ?>
```

As you may have already noticed, we need to create a composer class. Let’s do that right now. I will simplify it to show the basic idea and responsibilities. For now we will have a simple front page that displays several sections:
Now we need to create a template file for the front page and extend from the base template.
```twig
{% extends "base.twig" %}
{% block header %}
{% endblock %}
```

For the sake of simplicity, the header section will contain only the title and description of our site. Let’s define the following layout.
Now we need to collect the information to display in this section. Create a method called, for instance, collect_header_section_data(); in this case getting the required information is quite simple, as follows.
```php
    $this->context['header'] = $header_data;
}
```

We are just using WordPress functions to get the data.
To show how you can embed the content of a post into a section, let’s create a new post; in this case it will be called “Section About Us”. Make it private and exclude it from the search results (sitemap.xml). It may not be the best idea to use a separate post for a section on the front page, but this way we stay within pure WordPress functionality. As a result, all plugins that rely on core WordPress functionality, like translation plugins, will work fine.
```twig
{{ about.content }}
```
To get the post I use the following code.
```php
    $this->context['about'] = $about_us_data;
}
```

The next section will contain three recent posts. Here is a simple layout for that section.
For that section we used a for loop to iterate through the latest posts, with a nested loop over categories. Now let’s see how we can get the latest posts in our composer class.
```php
        $post_object->date_link = WordPress_Helper::get_post_date_link($post_object->post_date);
        $post_object->views_count = WordPress_Helper::get_post_view_count($post_object->id);
    }
    return $posts;
}

protected function collect_latest_blog_posts_data()
{
    $args = array(
        'posts_per_page'      => self::POSTS_COUNT,
        'status'              => 'publish',
        'ignore_sticky_posts' => 0,
    );
    $posts = Timber::get_posts($args);
    $this->context['latest_posts'] = $posts;
}
```

As I have already mentioned, Timber provides better integration between WordPress and Twig and a number of useful functions. Here we are calling the Timber::get_posts() method, which is just a wrapper that adds some additional conditions and filters. In the code above we also register a filter callback that attaches additional info to each fetched post; this filter is triggered by Timber.
Finally, the last section is Contacts. For that section we will use the well-known plugin Contact Form 7. Create a form in the admin section and you will get a shortcode for the newly created form; I got the following value: [contact-form-7 id="18" title="Contact Front Page"]. The template file for this section is simplified and has the following content.
And to get the contact form, we use the do_shortcode function as follows.
```php
    $this->context['contacts'] = $contacts_data;
}
```

Finally, having reached the last section with all the collect* methods defined, we need to create/override the render method, which actually passes all the collected data to Timber.
```php
public function render()
{
    $this->init();
    $this->before_collect();
    $this->collect_header_section_data();
    $this->collect_about_section_data();
    $this->collect_latest_blog_posts_data();
    $this->collect_contacts_section_data();
    Timber::render($this->get_template(), $this->context);
}

public function get_template()
{
    return array('front-page.twig');
}
```

In this article I have tried to explain and show how you can be a more productive WordPress developer and eliminate chaotic template files, a bloated functions.php, etc. Using Timber/Twig makes even a simple WordPress theme easier to develop, read and extend. In further articles I will show you how to create WordPress themes without modifying a single line of a template file created by a frontend developer.
The features explained in this article are only a minor part of the Timber/Twig functionality. Almost every feature that you need for development is already implemented and supported by the template engine. So do not hesitate to learn more about it.
All in all, the main ideas are always the same: keep your layers separated, depend on abstractions instead of concrete implementations, and keep the codebase clean.
The post Efficient, Fast and Scalable WordPress development with Timber (Twig) appeared first on Oleksandr Molochko.
The post How to Unbrick TP-Link WiFi Router WR841ND using TFTP and Wireshark appeared first on Oleksandr Molochko.
The TP-Link WR841ND is a very popular WiFi router because of its price, especially in my country. However, this router can provide many more features than the stock firmware offers. One day I decided to make it more powerful and feature-rich; furthermore, I noticed a message on the official TP-Link page about the possibility of installing custom firmware at your own risk. Therefore I decided to install the DD-WRT firmware. Here is one possible usage of DD-WRT firmware.
I flashed the firmware successfully and everything seemed to work, but then I realized that I had to restore all my settings (port forwarding, MAC address binding and so on) manually. Fortunately, I had backed up the config file, but that binary file is only suitable for the stock firmware. I had no time for exploring and parsing the binary format, so I decided to roll back in order to save my settings to a plain text file and then flash the DD-WRT firmware again. I rolled back successfully and saved my settings, but while updating the “web flash firmware” in the DD-WRT web GUI something went wrong and my router got bricked. The router started blinking all its LEDs periodically, which is also known as a boot loop, and its internal interface went up and down periodically as well.
Don’t worry if you bricked your router; there are a lot of ways to bring it back to life. The most commonly used methods are:
I could easily have soldered some wires and connected to the router over UART, but I really didn’t want to tear my router down, so I went with the second option.
All the tutorials you can find on the Internet suggest using fixed IP addresses like 192.168.0.68, but in general it could be any address; it depends on the boot firmware flashed to your router. You would therefore have to try each IP address in the range 192.168.0.2 to 192.168.0.68 until you find the proper one. Furthermore, the router’s TFTP client will look for a file with a hardcoded name that can differ between firmware versions. Of course, you shouldn’t have to try every possible combination; this article’s intention is to help you find the exact parameters.
We will find out the exact IP address and file name using a network sniffer – Wireshark. With Wireshark we can see everything that happens in our network, all the packets going back and forth. Here are the steps that helped me unbrick the router.
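For orientation, here is roughly what the capture looks like during the boot loop (the addresses and the firmware file name below are examples from one setup; yours may differ):

```
# ARP: the router asks for the address it expects the TFTP server to have
Who has 192.168.0.66?  Tell 192.168.0.86

# TFTP: the router then sends a Read Request for a hardcoded file name
Read Request, File: wr841nv9_tp_recovery.bin, Transfer type: octet
```

Once you see these two packets, assign the requested IP address to your wired interface and serve the requested file from any TFTP server.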
In this article I’ve shared my experience of debricking my router. Its main idea is to show how the recovery process works under the hood and how you can find out the IP address and file name that your bricked router is looking for. I hope this article helps you fix your router. If you have any questions, please feel free to leave them below in the comments section.
The post How to Unbrick TP-Link WiFi Router WR841ND using TFTP and Wireshark appeared first on Oleksandr Molochko.
The post Clean Architecture : Part 2 – The Clean Architecture appeared first on Oleksandr Molochko.
The Clean Architecture is a term proposed by Uncle Bob that refers to the principles and design practices used for building software architecture. It is defined in a rather abstract way, which causes a lot of questions and debates.
This article is intended to explain the most important concepts of the Clean Architecture. (Un)fortunately, this will not be a step-by-step guide. I think such guides are only applicable to technical questions that don’t require much thinking about each step; architectural decisions, by contrast, should all be well considered. Some of the concepts described in this article may seem absurd at first glance, but they should make much more sense after you adopt them in a project.
The main idea behind the Clean Architecture is quite similar to the architectures and concepts described in the previous chapter (Hexagonal, Onion); I would even say they are all about the same thing. Generally, it is just a set of the strongest and most important ideas from the preceding architectures. Don’t be naive and assume that the Clean Architecture is a silver bullet: even if you have grasped the ideas, applying it everywhere will not automatically result in a dramatic codebase improvement and project success. Using it without a solid understanding of why you need it, just because the topic is ubiquitous and everyone tries to apply this architecture, may lead to even worse results than not applying the principles at all.
Before we begin exploring the Clean Architecture, we need to understand why we need it at all and what features a good architecture should have. What requirements do we expect an architecture to fulfill, and, generally, what do stakeholders and the business expect from a software system?
One of the most important and ubiquitous concepts, used in almost every framework, is Inversion of Control. You have almost certainly heard about this principle already, for example from working with IoC containers in the Java EE world.
One of the most essential ideas in understanding The Clean Architecture is The Dependency Rule. It states that
Source code dependencies can only point inwards. Nothing in an inner circle can know anything at all about something in an outer circle. In particular, the name of something declared in an outer circle must not be mentioned by the code in an inner circle. That includes functions, classes, variables, or any other named software entity.
The rule is not directly and tightly related to the concept of Inversion of Control; however, applying the Dependency Rule forces us to apply IoC, and that is a good thing. Let’s find out how it is applied in the Clean Architecture by examining the flow of control. You can find the following diagram in the bottom right corner of the original architecture diagram.
What is this all about? We will transform the diagram a little to make it easier to understand.
Let’s follow this flow step by step, assuming some action, like a button click, has occurred. Don’t pay much attention to the different concepts used below; they will be explained in greater detail later in the article.
You may also wonder what these arrows represent. In short, they can be described as follows.
Implements an interface.
Uses an interface implementation, composition, has-a relation.
One more type of arrow is the dashed one.
These arrows show the real flow of execution; from a programming point of view it can be represented as a stack of function calls or the movement of the Program Counter (PC).
To make a final point in understanding this flow, let's look at possible code implementing this concept. First of all, let's define a use case; in this case it will be AddProductToCartUseCase.
package ua.com.crosp.solutions.cleanrachitecture.usecase;

public class AddProductToCartUseCase implements AddProductToCartInputPort {
    @Override
    public void execute(Params params, AddProductToCartOutputPort outputPort) {
        // Some entities orchestration
        outputPort.onProductAdded();
    }
}

In the code above we are passing AddProductToCartOutputPort to the execute method; however, it could be implemented in other ways, for example the implementation could be injected as a class member. It depends on the concrete project, language, approaches used, etc.
Next we will define the AddProductToCartInputPort and AddProductToCartOutputPort interfaces. Please note the package in which all these interfaces are defined.
package ua.com.crosp.solutions.cleanrachitecture.usecase;

public interface AddProductToCartInputPort {
    void execute(Params params, AddProductToCartOutputPort outputPort);

    public class Params {
    }
}

package ua.com.crosp.solutions.cleanrachitecture.usecase;

public interface AddProductToCartOutputPort {
    void onProductAdded();
}

A Presenter and a Controller might look as follows.
package ua.com.crosp.solutions.cleanrachitecture.presenter;

public class CartPresenter implements AddProductToCartOutputPort {
    @Override
    public void onProductAdded() {
    }
}

package ua.com.crosp.solutions.cleanrachitecture.controller;

public class CartController {
    @Inject
    protected AddProductToCartInputPort mAddProductToCartInputPort;
    @Inject
    protected AddProductToCartOutputPort mAddProductToCartOutputPort;

    public void onAddProductClick() {
        mAddProductToCartInputPort.execute(new AddProductToCartInputPort.Params(), mAddProductToCartOutputPort);
    }
}

Herein AddProductToCartInputPort and AddProductToCartOutputPort are injected into the Controller class; however, it could be implemented differently, as has already been mentioned. Using both controllers and presenters may seem weird; this is a contentious question, but the primary goal here is to understand the flow.
In real projects a Presenter and a Controller are usually combined, but a View should remain separate. Please also note that I am not referring to the Controller from the MVC pattern or the Presenter from the MVP pattern. They just represent a general intent: to control and to present.
package ua.com.crosp.solutions.cleanrachitecture.controller;

public class CartController implements AddProductToCartOutputPort {
    @Inject
    protected ProductView mProductView;
    @Inject
    protected AddProductToCartInputPort mAddProductToCartInputPort;

    public void onAddProductClick() {
        mAddProductToCartInputPort.execute(new AddProductToCartInputPort.Params(), this);
    }

    @Override
    public void onProductAdded() {
        mProductView.updateProduct();
    }
}

Furthermore, such naming conventions should probably be avoided.
The main idea of what was described above is that higher-level policies should not depend on details, frameworks, UI, etc. As you may have noticed, our AddProductToCartUseCase operates only with interfaces that are defined on the same level as the use case itself. Here lies the power of the Dependency Rule, embodied by inversion of control, dependency injection and dependency inversion. We can swap an implementation at any time; it only has to conform to the defined interface. Our high-level rules and policies are thus isolated from the things that are constantly changing, making the application core stable.
It is time to understand this diagram. At first sight it might look weird, because most traditional architectures are drawn from top to bottom. Before exploring it, it won't hurt to show the flow of execution on this diagram.
You may wonder what this curved line represents and why it passes boundaries back and forth several times. This is just a possible flow when you have a use case that requires multiple details to perform some action. The term details in this context means any tool (library, database, delivery mechanism, framework, device) that exists on the outermost layer.
For example, when an order is being confirmed, this action requires a lot of steps: generating an invoice, adding a history record, notifying managers, checking product availability, and so on. Therefore the code execution flow will look something like that curved line. Please note that this line doesn't represent dependency relationships; dependencies must only point inward.
Finally, I will transform the initial diagram into a more familiar shape.
This flow is present in almost any application.
I hope these diagrams help you understand the Clean Architecture approach better. Now let's examine each layer separately. We will start from the outermost ring and then move step by step towards the core layer.
This is the outermost layer on the diagram. It will probably be the largest layer of any system, since there is a variety of devices, libraries and frameworks. They change constantly, therefore we should abstract high-level (inner layer) policies away from details. Let's go through some of them.
I guess that's enough examples; the main idea should be clear. Since things on this layer change very frequently, we should be protected from these changes by a wall of abstraction and inversion of control. This will not only make the development process much easier but will also isolate possible bugs. Furthermore, this layer is usually hard to test because of its changing nature and instability.
This layer is named Interface Adapters in the original article. All data passed to outer or inner layers should be transformed here into a structure convenient for the layer it is being passed to. For instance, data passed to a View should primarily contain only String fields, so the view can just display it without any extra work.
You should already be familiar with design patterns such as MVP, MVC and MVVM. I won't explain each of them; the concrete one is selected according to project requirements, the type of project and other factors. This layer is just the glue between the outermost Details layer and the application layer, in this case represented by the Use Case layer. Roughly speaking, the letter M (Model) in the Clean Architecture is embodied by the Use Case layer, not by Entities or other core domain objects. Or the Model could simply be the data passed back and forth between the Use Case layer and the Interface Adapters layer.
Uncle Bob places Views in this layer; personally I don't understand the reason for that and would put Views into the outermost layer. If we are talking about a view interface, it could be defined in the Interface Adapters layer.
The next resident of this layer is the Gateway. Generally, a gateway is just another abstraction that hides the actual implementation behind it, similarly to the Facade pattern. It could be a data store (the Repository pattern), an API gateway, etc. For example, database gateways will have methods that meet the demands of the application. However, do not try to hide complex business rules behind such gateways. All queries to the database should be relatively simple, like CRUD operations; of course, some filtering is also acceptable.
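To make the gateway idea concrete, here is a minimal Java sketch (the names ProductGateway and InMemoryProductGateway are illustrative, not taken from a real project). The application layer depends only on the interface, and any implementation satisfying it (an in-memory store, a JDBC-backed one, a remote API) can be plugged in from the outer layer.

```java
import java.util.HashSet;
import java.util.Set;

public class Main {
    public static void main(String[] args) {
        // The application layer sees only the ProductGateway abstraction.
        ProductGateway gateway = new InMemoryProductGateway();
        gateway.save("product-42");
        System.out.println(gateway.exists("product-42")); // prints true
    }
}

// Defined alongside the use cases: only the operations the application needs.
interface ProductGateway {
    void save(String productId);

    boolean exists(String productId);
}

// Lives on the outer (details) layer; could equally be backed by a real database.
class InMemoryProductGateway implements ProductGateway {
    private final Set<String> storage = new HashSet<>();

    @Override
    public void save(String productId) {
        storage.add(productId);
    }

    @Override
    public boolean exists(String productId) {
        return storage.contains(productId);
    }
}
```

Swapping InMemoryProductGateway for another implementation requires no change in the code that uses the interface.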
This and all further layers should be testable, in contrast to the Details layer, which is usually really hard to test.
Now things start to get interesting. The Use Case layer is not as trivial to understand as the layers already discussed, and very often there are a lot of misunderstandings about its purpose.
A use case is a list of actions and communication steps between a role and an automated system that are required to achieve a goal. This definition contains some important points that are worth detailed investigation. A set of use cases is how a user sees a software system from the functionality perspective.
I guess everyone is aware of the Use Case Diagram; it is used in the first steps of a software system design. The better the defined use cases cover possible system operations, the easier it will be to design a proper architecture and choose a development process strategy and methodology.
From the technical point of view, which we are most interested in, a use case is mostly an orchestration of Entities. Entities will be discussed in a while; for now you need to know that they contain Critical Business Rules and dance to the tune of use cases, but according to the business rules. There is also a term called Interactor. A Use Case and an Interactor are related terms, usually used interchangeably; I think it would be better to say that an Interactor object implements a Use Case of a system.
The software in this layer contains application specific business rules. It encapsulates and implements all of the use cases of the system.
What does application specific mean in this case? Application-specific rules can be changed if application requirements change, where an application is an automated system. For example, consider a bank as the core business set of rules. An ATM application will have its own rules, a web system for personal account management will have other rules, and a mobile application may also have different rules. That's why such rules are application specific.
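A minimal Java sketch of this separation (the class names and the concrete limit are invented for illustration): the overdraft check is a Critical Business Rule that belongs to the bank itself, while the single-withdrawal limit is an application-specific rule of the ATM system only.

```java
public class Main {
    public static void main(String[] args) {
        Account account = new Account(100_000);
        new AtmWithdrawUseCase().execute(account, 20_000);
        System.out.println(account.balanceCents()); // prints 80000
    }
}

// Entities layer: the bank forbids overdrafts whether or not any software exists.
class Account {
    private long balanceCents;

    Account(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    void withdraw(long amountCents) {
        if (amountCents <= 0 || amountCents > balanceCents) {
            throw new IllegalStateException("Insufficient funds");
        }
        balanceCents -= amountCents;
    }

    long balanceCents() {
        return balanceCents;
    }
}

// Use Case layer: only the ATM application caps single withdrawals.
class AtmWithdrawUseCase {
    private static final long ATM_SINGLE_WITHDRAWAL_LIMIT_CENTS = 50_000;

    void execute(Account account, long amountCents) {
        if (amountCents > ATM_SINGLE_WITHDRAWAL_LIMIT_CENTS) {
            throw new IllegalArgumentException("Above the ATM single-withdrawal limit");
        }
        account.withdraw(amountCents);
    }
}
```

A web or mobile application would define its own use case classes around the very same Account entity.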
I have found a great use case example in this article: a use case of withdrawing money from an ATM.
I think that is a really good example of the separation between Application-Specific Rules and Critical Business Rules. The Critical Business Rules that belong to the Entities layer are shown in bold in the use case scenario above. Personally, I would move one more step into the business rules, but the idea should be clear.
Usually a use case requires some input data, and an output result is returned when the use case completes. The result does not have to be data returned directly; it can be a deferred event like a callback invocation or just some indication of success. Therefore, let's slightly modify the use case, adding input and output data.
As you can see, I have added input and output fields to the use case description. They are specified from a user's perspective, not as the arguments and return value of the execute method of an interactor implementation class, for instance.
Usually a use case is a single atomic action. Sometimes developers mix multiple related use cases into a single interactor/use case class (as I have done here). This is done to eliminate boilerplate code; however, this approach violates the Single Responsibility and Separation of Concerns principles. Personally, I prefer to keep use cases separate.
Documenting all use cases of a system
Why is it crucial to create use case diagrams for a system being developed? Have you ever had a situation where a new developer comes into a project and asks experienced project members about some business rules and use cases? Even experienced team members may hesitate before making a change or fixing a bug because of non-trivial use cases and policies.
Considering all these problems, I suggest documenting all the use cases that users expect the system to perform. Firstly and most importantly, you will have a collection of business rules that your client can understand, discuss and change. Secondly, you will have a single store that contains all (at best) possible system use cases and business rules; as a result, new members (both developers and stakeholders) have to spend much less time than they would if they elicited all this information from other team members. Finally, it will be much easier to design the future development of the system without even diving into code.
Use cases can be documented in the form of diagrams, scenarios (as shown above), steps and conditions, or any form that is convenient for the project, team and stakeholders.
The last and the most important layer from the architecture perspective is the Entities, or the Domain, layer. Entities encapsulate Critical Business Rules: rules that support the existence of the business, but not necessarily the presence of a software system.
This layer should be the most bulletproof one, and external changes shouldn't affect its operation. Entities usually embody business rules that make sense even without any software system or application.
From the technical side, an Entity is an object that mostly contains methods and logic, not just data. Entities are not DTOs or DAOs.
Designing the Entities layer is not always a trivial task. If you are developing an enterprise application, you will probably spend a lot of time architecting this layer, and errors in it may have really serious consequences. There is a set of really valuable principles called Domain-Driven Design; these concepts aim to ease the development of large enterprise applications with complex business rules. Even if you are not developing an enterprise application, they are still worth your attention, so I encourage you to study the Domain-Driven Design approach.
What about simple applications that are not part of an enterprise? The Entities layer should contain the most general and high-level rules and policies, which should not be affected by changes in the upper layers.
Entities must not persist their own state in a database. There are a lot of ORM libraries and frameworks that impose a direct mapping of Entities to the database, but such an approach violates the Dependency Rule: when a base Entity class extends some database model class, it is still aware of the database and other low-level details. Some frameworks apply Inversion of Control to solve this problem, others use bytecode weaving; generally speaking, Entities should not be aware of the existence of a database.
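A small Java sketch of that separation (all names here are illustrative): the entity holds behavior only, while a separate record type and mapper on the outer layer take care of the storage shape, so the entity never extends a database model class.

```java
public class Main {
    public static void main(String[] args) {
        Product product = new Product("p-1", 10_000);
        product.applyDiscount(10);
        ProductRecord record = ProductMapper.toRecord(product);
        System.out.println(record.priceCents); // prints 9000
    }
}

// Entities layer: pure behavior, no persistence awareness.
class Product {
    private final String id;
    private long priceCents;

    Product(String id, long priceCents) {
        this.id = id;
        this.priceCents = priceCents;
    }

    void applyDiscount(int percent) {
        priceCents -= priceCents * percent / 100;
    }

    String id() {
        return id;
    }

    long priceCents() {
        return priceCents;
    }
}

// Outer layer: a structure shaped for storage (an ORM row, a document, etc.).
class ProductRecord {
    String id;
    long priceCents;
}

// Outer layer: the mapper knows about both shapes; the entity knows about neither.
class ProductMapper {
    static ProductRecord toRecord(Product product) {
        ProductRecord record = new ProductRecord();
        record.id = product.id();
        record.priceCents = product.priceCents();
        return record;
    }
}
```

Changing the storage shape now only touches the record and the mapper, never the entity.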
One tantalizing question is the difference between the Use Cases and Entities layers: when to use one over the other? Here is a comparison between the two layers as I understand them.
In this article I've tried to share my experience of understanding and applying the Clean Architecture principles. All the described rules and concepts can be really valuable if used correctly; otherwise they may cause even more mud and spaghetti in your project. Therefore, novice developers should gain enough experience before applying these rules.
These rules should not be followed blindly. I would say that the Clean Architecture is just a set of practices that should inform further decisions and design steps regarding the shape of the software. Every project requires a unique approach, chosen by an architect based on many factors.
If you have any questions regarding the article, please feel free to contact me or post comments below, discussion is always a great way to reach a consensus.
The post Clean Architecture : Part 2 – The Clean Architecture appeared first on Oleksandr Molochko.
The post How to download images using WebdriverIO (Selenium Webdriver) appeared first on Oleksandr Molochko.
This will be a really short article explaining how to download an image on the browser side while running tests with the WebdriverIO library.
I have a use case that requires verifying that a loaded page contains the expected image. The main problem is that we don't have many options for saving the image:
The only method I have found that works for this case is the third one. The issue here is that network communication and IO operations are executed asynchronously so as not to block the UI thread. Let's start by creating a simple script that just downloads an image by URL and converts it into a Base64 string.
window.downloadImageToBase64 = function (url, callback) {
    // Put these constants out of the function to avoid creation of objects.
    var STATE_DONE = 4;
    var HTTP_OK = 200;
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        // Wait for a valid response
        if (xhr.readyState == STATE_DONE && xhr.status == HTTP_OK) {
            var blob = new Blob([xhr.response], {type: xhr.getResponseHeader("Content-Type")});
            // Create a file reader and convert the blob array to a Base64 string
            var reader = new window.FileReader();
            reader.readAsDataURL(blob);
            reader.onloadend = function () {
                var base64data = reader.result;
                callback(base64data);
                console.log(base64data);
            }
        }
    };
    xhr.responseType = "arraybuffer";
    // Load async
    xhr.open("GET", url, true);
    xhr.send();
    return 0;
};

You can check this function: it should print a Base64 string representing the image to the console and send the result back to the callback if one was provided.
As you can see, there are two async calls in the function: the first is the XHR request and the second is reading the blob data using FileReader. That's why we cannot get the response immediately.
I think it is always worth knowing how things work under the hood. All communication between client libraries and browser drivers is done via the JSON Wire Protocol, so all data is sent in plain JSON format. This can help you understand how all of this works internally and debug issues. You can use the request-debug library to log all requests going back and forth.
The webdriverio library provides two methods for executing scripts on the browser side: execute and executeAsync. The former is synchronous, so the result of evaluating the script is returned to the client after the script execution has finished. In our case this function won't work because of the async calls in the downloadImageToBase64 function.
The executeAsync function is exactly what we are looking for. The last argument of the function is a callback that should be called from the script executed on the browser side when it completes. Before we can use executeAsync we need to set the script execution timeout. According to the documentation it should be 30 seconds by default; however, in my case it doesn't even wait a couple of seconds and the following error is thrown: Error: asynchronous script timeout: result was not received in 0 seconds. You can set the timeout as follows.
client.timeouts('script', SCRIPT_EXECUTION_TIMEOUT)

We need to inject our function into the context of the browser used for running tests in order to use it in a script later. Of course you could just pass all the functions in a single executeAsync call, but I prefer a slightly cleaner way. In my case I need several helper functions on the browser side, so I have defined a custom command that sets all these utils/helpers into the global (window) context of the browser.
client.addCommand("injectHelperScripts", function () {
    var self = this;
    return self.execute(
        function injectScripts() {
            window.addListener = function (element, eventName, handler) {
                if (element.addEventListener) {
                    element.addEventListener(eventName, handler, false);
                } else if (element.attachEvent) {
                    element.attachEvent('on' + eventName, handler);
                } else {
                    element['on' + eventName] = handler;
                }
            };
            window.removeListener = function (element, eventName, handler) {
                if (element.addEventListener) {
                    element.removeEventListener(eventName, handler, false);
                } else if (element.detachEvent) {
                    element.detachEvent('on' + eventName, handler);
                } else {
                    element['on' + eventName] = null;
                }
            };
            window.downloadImageToBase64 = function (url, callback) {
                var STATE_DONE = 4;
                var HTTP_OK = 200;
                var xhr = new XMLHttpRequest();
                xhr.onreadystatechange = function () {
                    // Wait for a valid response
                    if (xhr.readyState == STATE_DONE && xhr.status == HTTP_OK) {
                        var blob = new Blob([xhr.response], {type: xhr.getResponseHeader("Content-Type")});
                        // Create a file reader and convert the blob array to a Base64 string
                        var reader = new window.FileReader();
                        reader.readAsDataURL(blob);
                        reader.onloadend = function () {
                            var base64data = reader.result;
                            callback(base64data);
                            console.log(base64data);
                        }
                    }
                };
                xhr.responseType = "arraybuffer";
                // Load async
                xhr.open("GET", url, true);
                xhr.send();
                return 0;
            };
        }
    );
});

Finally, let's create a custom command wrapping the downloadImageToBase64 function.
client.addCommand('getBinaryImage', function (url) {
    var self = this;
    return self.executeAsync(
        function downloadImageBinary(url, callback) {
            return downloadImageToBase64(url, callback);
        }, url);
});

Now it is time to test our code. Personally, I am using Selenium Docker images, or rather the Selenium Hub image with multiple worker nodes; Docker always helps me keep my machine clean. As a result, in my case I am using a remote connection. Here is a working example for downloading an image in Base64 format from the Amazon website.
static downloadImageFromAmazon(url, imageId) {
    let client = webdriverio.remote(TestRunner.generateWebDriverOptionsChrome())
        .init();
    client = TestRunner.setCustomCommandsForClient(client);
    let chain = client
        .timeouts('script', TestRunner.SCRIPT_TIMEOUT)
        .url(url)
        .waitForVisible('#' + imageId, TestRunner.DELAY_VISIBLE)
        .injectHelperScripts()
        .getAttribute('#' + imageId, "src");
    return chain.then((src) => {
        console.log(src);
        return chain.getBinaryImage(src);
    })
    .then((result) => {
        return result.value;
    })
    .catch(function (err) {
        console.log('Error ' + err);
    });
}

Please note that this image is hosted on a different domain than amazon.com, therefore by default we are not able to execute the XHR request because of the Same-origin policy. Accordingly, you may need to disable security features of the browser. In the case of the Chrome driver I have the following configuration.
static generateWebDriverOptionsChrome() {
    return {
        desiredCapabilities: {
            browserName: 'chrome',
            chromeOptions: {
                args: ['disable-web-security']
            },
            loggingPrefs: {
                'driver': 'INFO',
                'browser': 'INFO'
            }
        },
        logLevel: 'verbose',
        host: 'localhost',
        port: 4444
    };
}

Now you can try to download a book cover image by a URL like this.
TestRunner.downloadImageFromAmazon("https://www.amazon.com/Clean-Architecture-Craftsmans-Software-Structure/dp/0134494164", "imgBlkFront")
    .then((result) => {
        console.log("GOT BASE64 IMAGE");
        console.log(result);
    });

In my case I got the following Base64 image. Open Developer Tools and get the src attribute of the image.
Here is a utility class you may need to save a Base64-encoded image to a file.
const isPng = require('is-png');
const fs = require('fs'),
    path = require('path');

class FileUtils {
    constructor() {
    }

    static isValidPngFile(filename) {
        let buffer = [];
        try {
            buffer = fs.readFileSync(filename);
        } catch (err) {
            return false;
        }
        return isPng(buffer);
    }

    static ensureDirectoryExistence(filePath) {
        let dirname = path.dirname(filePath);
        if (fs.existsSync(dirname)) {
            return true;
        }
        FileUtils.ensureDirectoryExistence(dirname);
        fs.mkdirSync(dirname);
    }

    static saveBase64Image(base64Image, filename) {
        // Extract the image subtype (e.g. "png" from "image/png")
        var imageTypeRegularExpression = /\/(.*?)$/;
        let imageBuffer = FileUtils.decodeBase64Image(base64Image);
        var imageTypeDetected = imageBuffer.type.match(imageTypeRegularExpression);
        filename = filename + "." + imageTypeDetected[1];
        FileUtils.ensureDirectoryExistence(filename);
        // Save the decoded binary image to disk
        fs.writeFileSync(filename, imageBuffer.data);
        return filename;
    }

    static decodeBase64Image(dataString) {
        var matches = dataString.match(/^data:([A-Za-z-+/]+);base64,(.+)$/);
        var response = {};
        if (!matches || matches.length !== 3) {
            throw new Error('Invalid input string');
        }
        response.type = matches[1];
        response.data = Buffer.from(matches[2], 'base64');
        return response;
    }
}

module.exports = FileUtils;

In this short how-to guide I shared my experience with image downloading using the WebdriverIO library. Nonetheless, the main intention of this article is to show how you can use custom commands and browser-side script execution to get data from a page. If you have any troubles or suggestions, please leave a comment below.
The post Routing network traffic through a transparent SOCKS5 proxy using DD-WRT appeared first on Oleksandr Molochko.
DD-WRT is a great firmware developed to enhance the performance of and bring powerful features to cheap routers (even under $50), turning them into super routers. However, one feature missing by default is transparent proxifying of network traffic through a SOCKS5 proxy server, whereas you can establish an encrypted VPN tunnel to a network directly from a DD-WRT powered router. Personally, I have a very cheap TP-Link TL-WR841ND v8 router; I never thought this device could have all the features that are now available thanks to the DD-WRT firmware. I use SOCKS5 proxies regularly, and I have to configure browsers and change system settings every time the configuration changes. What's more, I haven't managed to configure a system-wide proxy on macOS; some applications don't respect proxy settings and send network traffic directly. As a result, I decided to configure a transparent proxy/redirector to ensure that all traffic is really forwarded to a proxy server, and there is no better place than a router to control all network communications.
This tutorial will guide you through the process of configuring a network-wide proxy redirector using Redsocks and a router with DD-WRT installed.
In order to complete this tutorial you need basic networking and administration knowledge. Nevertheless, I will try to explain each step in as much detail as possible. Furthermore, you will need the following software/hardware:
As I have already mentioned, DD-WRT is a firmware that boosts your router, revealing a lot of useful features that are not available with the default firmware. Under the hood DD-WRT is Linux-based; yes, it is a tiny Linux OS running in your router. Surely, this is not plain Linux like you may have on your desktop; it is modified to satisfy router requirements.
One of the most important features is that DD-WRT comes with a nice, intuitive web UI. I enjoy working in the terminal, but you would need a really good memory to remember all the configuration options and commands used to configure a router (especially because while configuring a router you may not have Internet access). However, if you still want to use the command line to configure your router, try OpenWRT.
I won't cover the process of installing the DD-WRT firmware on a router, since there is a variety of routers supporting DD-WRT and it is impossible to cover this even in several articles. Hence, if your router supports DD-WRT, find a tutorial describing the installation process for your model. In most cases the steps don't differ from the official firmware update process.
A transparent proxy is a server that receives your request, fetches the requested resource, gets the response and returns the result to you; this server sits between you and the outside world. The main feature of a transparent proxy is that it doesn't modify your requests; it just sends them on to other servers. Mostly these proxies are used to cache requests, and usually a client is not aware of using a proxy, which is why this type of proxy server is called transparent. Here are some common uses of a transparent proxy:
A transparent redirector is an application that directly forwards all your packets to a proxy server. It differs from a transparent proxy in that it does not fetch the requested resource itself; instead, it simply redirects the complete request to a proxy server. Consequently, the proxy server in turn fetches the requested resource for you, and therefore your real IP is concealed on the proxy server's side, not by a transparent redirector like Redsocks.
Transparent redirectors are frequently used as a system-wide proxy: all packets in the system are forwarded to a process running locally (or on another machine), and the redirector process sends all received packets to a proxy server according to its configuration file. It is like a postman or a delivery company carrying packets from sender to receiver without modifying their content (at best).
In the figure above you can see the network topology we are going to build. The topology is quite simple; however, it is crucial to understand how a network packet flows through the network.
Let’s follow the packet flow through the network step by step.
The packet path is relatively long. To reduce the response time we could implement redirection to a SOCKS5 proxy server directly on the router, but unfortunately my router is not powerful enough to handle this by itself; it won't even establish an OpenVPN connection. If you have a more powerful router, you could try to compile the appropriate software and configure it to redirect packets directly to a proxy server.
You can find out whether your router supports the DD-WRT firmware on the Supported Devices page.
Find and follow the instructions for flashing the firmware on your router. After flashing, you should be able to access the DD-WRT web UI. In my case it looks as follows.
Since we will be executing shell commands, we need a way to enter them. You can use the web UI to execute commands, but I prefer doing it from a terminal. To configure remote SSH management for your router, first access the web management UI; in the case of my local network it is accessible at 192.168.0.1.
First of all, navigate to the Services tab, then scroll down to the Secure Shell section.
In this section, enable SSHd. Next, set the port number for accessing the router via SSH; by default it is 22, so if you already have port 22 forwarded you have to choose another port. You can allow password login, but it is better to use SSH keys for security reasons. If you will be using SSH keys, disable Password Login and paste your public key into the Authorized Keys text area. It should be formatted in the following way.
ssh-rsa

If you decided to use password authentication, the username is root and the password is admin by default.
Save the new settings by clicking the Apply Settings button. Next, go to the Administration tab and find the Remote Access section.
Enter the same port number as on the previous tab and enable the SSH Management option. Then save the settings.
Now you should be able to connect to your router using a command similar to the one below; don't forget to set up your SSH keys correctly.
ssh -p

In case you are a Windows user, you need to use PuTTY to connect to your router via SSH. Open PuTTY and enter the router's IP address and the port number you have just set.
If you are using SSH keys for authentication, go to the SSH settings category and then to the Auth category. Select your private key; it should be in PPK format. You can use PuTTYgen to convert SSH keys.
Don't forget to save the session settings. Now try to open a connection to the router (use root as the username); you might be asked to enter a passphrase if your private key is protected.
Now we need to configure the machine that will run Redsocks and as a result act as a "proxifier". You are free to use a real machine, but it is usually irrational to dedicate a separate physical machine to this purpose, so I will use a virtual machine. Another great choice would be an embedded computer like a Raspberry Pi, Onion Omega, Orange Pi, etc. In the case of an embedded device, though, I would suggest using the Ethernet interface rather than a WiFi connection.
I have a PC at home with Proxmox installed, so I will create a virtual machine for this purpose and install Ubuntu 16.04 distro there.
Next, before continuing, we need to ensure that our machine always gets the same IP address. The component responsible for that is the DHCP server running on your router. We need to configure a static DHCP lease to assign a fixed IP address to a specific machine.
First of all, you need to find out the MAC address of your machine. In my case I can find it through the Proxmox web UI in the virtual machine's configuration. Once you have the MAC address, open the DD-WRT UI, navigate to the Services tab and find the "Static Leases" table on the page. Click Add and fill in all required fields. In my case I entered the following data.
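As an aside, if the target machine itself runs Linux, you can also read the MAC address straight from sysfs instead of the hypervisor UI (a sketch; eth0 is an assumed interface name, substitute your own):

```shell
# Print the MAC address of a network interface via sysfs.
mac_of() {
    cat "/sys/class/net/$1/address"
}

mac_of lo    # the loopback is always present and prints 00:00:00:00:00:00
# For a real NIC, use e.g.: mac_of eth0
```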
Click Apply Settings; you may need to clear the lease cache and request a new IP address from the DHCP server. Ensure that the correct IP address was assigned to your machine.
Next we need to install Redsocks, a transparent redirector. I have written a complete article about installing and configuring Redsocks. Please follow the instructions, but do not set any iptables rules for now or you may set some rules just to check that Redsocks is working properly.
Please note that you should set the local_ip property to 0.0.0.0 to be able to connect externally. Here is my configuration file (/etc/redsocks.conf).
base {
    log_info = on;
    daemon = on;
    log_debug = on;
    log = "file:/var/log/redsocks.log";
    redirector = iptables;
}

redsocks {
    local_port = 12345;
    ip = 33.33.33.33;
    local_ip = 0.0.0.0;
    disclose_src = false;
    type = socks5;
    port = 33333;
}
One more important point: packets will arrive at different ports on the proxifying machine depending on the protocol used, while the Redsocks process listens on one specific port. Therefore, we need to redirect all incoming packets to the port the Redsocks process listens on. You can achieve this with the following rule.
iptables -A PREROUTING -t nat -i ens18 -p tcp -m multiport --dports 80,443 -j REDIRECT --to-port 12345
Now we have reached the most important part of the tutorial: we are going to set iptables routing rules on our DD-WRT powered router.
Connect to the router over SSH as described above; if your credentials are valid, you should see a greeting from your router.
You may have different cases and goals, so apply your own iptables rules as needed. In this article I am going to redirect all HTTP/HTTPS packets from a specific host to the transparent redirector.
First of all, we need to decide how to select the packets that should be redirected through the proxy in our network. We have two choices: NAT or the mangle table (MARK). With the NAT method, the IP packet is modified, since the destination and source addresses change. This leads to undesirable consequences, some of which are described here. Accordingly, we will stick with the mangle table option, since in this case packets remain unmodified.
The mangle table is used to modify packets; mostly it is used to set the Type of Service (TOS) or Time to Live (TTL) fields in the IP header. Another case where it comes in handy is to MARK packets, grouping related packets to make them easier to process further, for instance when making routing decisions, as you will see later in this article. One important thing to understand about the MARK target is that the packet itself is not modified at all; all information about marks is stored in kernel memory. If you are lucky (that is, you have the specific kernel modules loaded), you can find information about individual packets and connections in the /proc/ pseudo-filesystem, in /proc/net/ip_conntrack and /proc/net/nf_conntrack respectively.
Let’s set rules to mark only appropriate packets (in my case HTTP/HTTPS packets).
iptables -I PREROUTING 1 -t mangle -s 192.168.0.113 ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j MARK --set-mark 3
I think this rule requires a brief explanation.
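Broken down flag by flag, it works like this. The helper below is a dry-run sketch of mine: it only prints the rule so it can be reviewed on any machine, and LAN_NET stands in for the nvram lookups, which exist only on the router:

```shell
# -t mangle            : use the mangle table, so the packet itself stays unmodified
# -I PREROUTING 1      : insert as the first rule of the PREROUTING chain
# -s <host>            : match only traffic originating from the proxified machine
# ! -d <lan network>   : but not traffic destined for the local network itself
# --dports 80,443      : only HTTP/HTTPS
# -j MARK --set-mark N : tag matching packets with mark N (kept in kernel memory)
mark_rule() {
    echo "iptables -I PREROUTING 1 -t mangle -s $1 ! -d $2" \
         "-p tcp -m multiport --dports 80,443 -j MARK --set-mark $3"
}

LAN_NET="192.168.0.1/255.255.255.0"   # on the router: `nvram get lan_ipaddr`/`nvram get lan_netmask`
mark_rule 192.168.0.113 "$LAN_NET" 3
```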
The next rule is similar to the previous one, but we jump to a different target, CONNMARK. This target is used to mark a whole connection, whereas the MARK target marks individual packets.
iptables -I PREROUTING 2 -t mangle -s 192.168.0.113 ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark
The --save-mark option applies a packet's mark to its connection, so the whole connection ends up marked according to the packet.
In addition, we need to exclude our public IP from routing through the proxy server. I faced an interesting problem here: after applying the rules above, I could no longer access local network resources that are available publicly and resolve to my public IP, like someresource.crosp.net. To get the public IP address on a DD-WRT router, we can use the following command.
nvram get wan_ipaddr
As a consequence, we end up with the following iptables rules.
iptables -I PREROUTING 1 -t mangle -s 192.168.0.113 ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j MARK --set-mark 3
iptables -I PREROUTING 2 -t mangle -s 192.168.0.113 ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark
iptables -I PREROUTING 3 -t mangle -s 192.168.0.113 ! -d `nvram get wan_ipaddr` -p tcp -m multiport --dports 80,443 -j MARK --set-mark 3
iptables -I PREROUTING 4 -t mangle -s 192.168.0.113 ! -d `nvram get wan_ipaddr` -p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark
The next and last step is to add static routing rules.
ip rule add fwmark 3 table 13
ip route add default via 192.168.0.145 table 13
We are directing all packets marked with the number 3 to the routing table numbered 13, and adding a default route through the proxifying machine to that table. 192.168.0.145 is the IP address of the proxifying machine.
All in all, here are all the rules, as a bash script, that we need to apply to get packets redirected to the proxifying machine.
iptables -I PREROUTING 1 -t mangle -s 192.168.0.113 ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j MARK --set-mark 3
iptables -I PREROUTING 2 -t mangle -s 192.168.0.113 ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark
iptables -I PREROUTING 3 -t mangle -s 192.168.0.113 ! -d `nvram get wan_ipaddr` -p tcp -m multiport --dports 80,443 -j MARK --set-mark 3
iptables -I PREROUTING 4 -t mangle -s 192.168.0.113 ! -d `nvram get wan_ipaddr` -p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark
ip rule add fwmark 3 table 13
ip route add default via 192.168.0.145 table 13
All the rules you set will be lost after a router reboot. If you need to persist them, you have several options. For instance, you can do this from the web UI by navigating to Administration -> Commands and clicking Save Firewall after you have entered your rules.
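Since the four mark rules differ only in their destination exclusion, the whole block can also be generated by a small helper of my own. It only prints the commands (using -A to append rather than fixed insert positions), with the nvram lookups replaced by explicit arguments, so the output can be reviewed anywhere and then pasted into the router shell:

```shell
# Print the full proxification rule set for review.
# Args: client_ip lan_net wan_ip proxifier_ip mark table
proxify_rules() {
    for DST in "$2" "$3"; do
        echo "iptables -A PREROUTING -t mangle -s $1 ! -d $DST" \
             "-p tcp -m multiport --dports 80,443 -j MARK --set-mark $5"
        echo "iptables -A PREROUTING -t mangle -s $1 ! -d $DST" \
             "-p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark"
    done
    echo "ip rule add fwmark $5 table $6"
    echo "ip route add default via $4 table $6"
}

proxify_rules 192.168.0.113 192.168.0.1/255.255.255.0 22.22.22.22 192.168.0.145 3 13
```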
When you need to disable proxification, you can use the following script, or simply reboot the router :).
#!/bin/sh
PROXIFYING_MACHINE=192.168.0.145
MACHINE_TO_PROXIFY=192.168.0.113
iptables -D PREROUTING -t mangle -s $MACHINE_TO_PROXIFY ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j MARK --set-mark 3
iptables -D PREROUTING -t mangle -s $MACHINE_TO_PROXIFY ! -d `nvram get lan_ipaddr`/`nvram get lan_netmask` -p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark
iptables -D PREROUTING -t mangle -s $MACHINE_TO_PROXIFY ! -d `nvram get wan_ipaddr` -p tcp -m multiport --dports 80,443 -j MARK --set-mark 3
iptables -D PREROUTING -t mangle -s $MACHINE_TO_PROXIFY ! -d `nvram get wan_ipaddr` -p tcp -m multiport --dports 80,443 -j CONNMARK --save-mark
while ip rule delete from 0/0 to 0/0 table 13 2>/dev/null; do true; done
ip route flush table 13
It is time to test our configuration.
First of all, run Redsocks on the proxifying machine and ensure that the process listens on 0.0.0.0, not only on localhost (127.0.0.1).
root@proxyfier:/home/crosp# /opt/redsocks/redsocks -c /etc/redsocks.conf
root@proxyfier:/home/crosp# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      977/sshd
tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      1174/redsocks
tcp6       0      0 :::22                   :::*                    LISTEN      977/sshd
Next, set your iptables and routing rules on the router.
Finally, check whether your traffic is proxified, for example by executing this command on the machine you specified in the rules.
MacBookCROSP:~ crosp$ curl 'https://api.ipify.org?format=json'
{"ip":"22.22.22.22"}
If the configuration is successful, you should see the IP address of the proxy server you used when setting up Redsocks.
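To script this check, the ip field can be extracted from the JSON response and compared against the proxy's exit address (a sketch of mine using sed; jq would be cleaner if it is installed):

```shell
# Extract the "ip" field from an api.ipify.org style JSON response.
ip_of() {
    echo "$1" | sed 's/.*"ip":"\([^"]*\)".*/\1/'
}

RESPONSE='{"ip":"22.22.22.22"}'   # in practice: RESPONSE=$(curl -s 'https://api.ipify.org?format=json')
[ "$(ip_of "$RESPONSE")" = "22.22.22.22" ] && echo "traffic goes through the proxy"
```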
If you want to persist the proxy scripts and avoid reconfiguring all the rules every time the router reboots, you have multiple options. Here is an article describing all possible ways to create startup scripts. However, in my case there is no option to enable JFFS on the router, since it has restricted hardware resources. There are some advanced options and workarounds to enable JFFS on routers with poor hardware, but they require some time and effort to get working.
Therefore, I need to use other methods, such as NVRAM, to save startup scripts. There is a great number of different routers, each with different hardware capabilities, so I am not able to describe all methods here. Try to find a solution suitable for your router or a similar one.
In this article I've described how to configure network traffic proxification using a DD-WRT router with Redsocks. This approach has some important benefits. First of all, you can transparently use a proxy for a single machine, multiple machines or even a whole local network. Secondly, using the mangle table prevents some problems, since packets are not modified and NAT is not used. You can try to create more complex topologies, and if you have a more powerful router, you can move some of the software directly onto the router, which will speed up network connections.
I hope this tutorial was useful for you. If you have any troubles with setting up this configuration, please feel free to leave comments below.
The post Routing network traffic through a transparent SOCKS5 proxy using DD-WRT appeared first on Oleksandr Molochko.
The post How to install and configure Redsocks on Centos Linux appeared first on Oleksandr Molochko.
Redsocks is a tool that allows you to proxify (redirect) network traffic through a SOCKS4, SOCKS5 or HTTP proxy server. It works at the lowest level, the kernel level (iptables). The other possible approach is an application-level proxy, where the proxy client is implemented inside the application itself. Because Redsocks operates at the system level, running applications are not even aware that their network traffic is sent through a proxy server; as a result it is called a transparent proxy redirector.
If you are reading this article, you probably have an idea of why you need a proxy server. You should also be acquainted with basic proxy terms and definitions in order to understand everything described in this tutorial. It wouldn't hurt to have some Linux administration skills as well.
For this tutorial I will be using a CentOS 7 Minimal installation; it has only the most essential applications installed. You are free to use any distro, in which case you may skip some steps.
Before installing Redsocks, it is worth knowing how the tool works internally. This will help you understand it better and troubleshoot issues.
First of all, Redsocks leverages features provided by the Linux kernel firewall (the Netfilter module).
I hope this diagram will help you to understand the flow of a packet while using Redsocks.
Here is a brief explanation of how a packet gets redirected to Redsocks. An application opens an ordinary TCP connection, knowing nothing about any proxy. An iptables rule in the nat table matches the packet and redirects it to the local port the Redsocks process listens on. Redsocks accepts the connection, obtains the original destination address from the kernel, connects to the configured proxy server and performs the proxy handshake. From then on it simply relays data between the application and the proxy.
First and foremost, you need to update repositories and installed software on the system.
yum update
In order to install Redsocks, we first need to compile it. Change into the directory where you want to keep the source code.
cd /opt/
By default the git command line tool is not installed in most CentOS distros, so install it first.
yum install git
And clone the Redsocks source code from the git repository.
git clone https://github.com/darkk/redsocks
Now change into the redsocks directory.
cd redsocks/
Try to compile the application; it will probably fail. In my case even make is not installed.
[root@centos7 redsocks]# make
-bash: make: command not found
[root@centos7 redsocks]#
First you need to install the Development Tools group (build tools) if you haven't already.
yum group install "Development Tools"
Next, we need to install the dependencies required to compile Redsocks successfully.
yum install libevent libevent-devel
After installation, try to compile Redsocks again using the make command. If compilation succeeds, you should see the compiled binary file in the current directory.
[root@centos7 redsocks]# ls -l redsocks
-rwxr-xr-x 1 root root 415968 Jul 7 08:49 redsocks
Now you can copy the binary file to any folder listed in the $PATH variable, to be able to execute it without specifying the full path to Redsocks.
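One way to do that is with install(1), which copies the file and sets the permissions in one step (a sketch; /usr/local/bin is an assumed destination, pick any directory from your $PATH):

```shell
# Copy a binary into a target directory with executable permissions.
install_bin() {
    install -m 755 "$1" "$2"
}

# Usage on the build machine:
# install_bin ./redsocks /usr/local/bin/redsocks
```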
To redirect necessary packets to Redsocks we need to define some iptables rules. I will use the rules suggested on the official Redsocks page.
# Create new chain
iptables -t nat -N REDSOCKS
The rule above creates a new custom chain in the nat table.
Next we need to exclude all local and reserved network addresses, so that packets with a destination address in any of the following ranges are not sent to Redsocks.
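The exclusion rules that follow all share one template, so they can equally be generated with a loop; this sketch of mine only prints the commands (drop the echo to actually apply them):

```shell
# Reserved and private ranges that should bypass the proxy.
exclude_rules() {
    for NET in 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 169.254.0.0/16 \
               172.16.0.0/12 192.168.0.0/16 224.0.0.0/4 240.0.0.0/4; do
        echo iptables -t nat -A REDSOCKS -d "$NET" -j RETURN
    done
}

exclude_rules
```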
# Exclude local and reserved addresses
iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
Now we need to add a rule that will redirect all packets inside our custom REDSOCKS chain to a local port; we will use the default one, 12345.
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
Please note that this rule doesn't redirect every packet sent in the system to port 12345; it only redirects packets that have already entered the REDSOCKS chain. You can check this, for example, using the following command.
wget google.com
Consequently, we need to define a rule that redirects chosen packets, by some criteria, to the REDSOCKS chain. You are free to apply any rules you want, but I will show how to redirect all HTTP and HTTPS packets through a proxy. Define the following rules.
# Redirect all HTTP and HTTPS outgoing packets through Redsocks
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDSOCKS
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDSOCKS
You may also need to define the following rules in the PREROUTING chain, for redirecting incoming packets to the REDSOCKS chain.
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDSOCKS
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDSOCKS
iptables -t nat -A PREROUTING -p tcp --dport 1080 -j REDSOCKS
Do not be confused by these rules: they apply only to incoming packets, which is why the chain is called PREROUTING. Here is the diagram that shows the packet flow through the iptables tables and chains.
Furthermore, you are free to define any rules you need; just remember to jump (-j REDSOCKS) to the REDSOCKS chain. Here is another example from the official documentation. It redirects only packets sent by a specific user, which can be very useful in some cases. You can follow a user-per-application strategy (similar to the one used in Android), and as a result set application-specific rules.
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner crosp -j REDSOCKS
One more important point: avoid using the root user in iptables rules, or you may get stuck in an infinite loop. Here is a problem I encountered because of my inattention.
Having set the iptables rules, the final step is to configure Redsocks itself using its configuration file.
Create a file named redsocks.conf in the directory where the binary file is located (or any other directory), and let's start by defining the base section of the configuration file as follows.
base {
    log_debug = on;
    log_info = on;
    log = "stderr";
    daemon = off;
    redirector = iptables;
}
I think each parameter is self-explanatory. You can find a complete description of every option in the example configuration file in the official repository.
Next we need to define proxies. This is done using redsocks sections. Here is an example of a possible proxy configuration.
redsocks {
    // Local IP to listen on
    local_ip = 127.0.0.1;
    // Port to listen on
    local_port = 12345;
    // Remote proxy address
    ip = super.socks.proxy.com;
    port = 9966;
    // Proxy type
    type = socks5;
    // Username to authorize on the proxy server
    login = anonymous;
    // Password for the proxy user
    password = verystrongpassword;
    // Do not disclose the real IP
    disclose_src = false;
}
As you can see, this is a SOCKS5 proxy configuration. Redsocks will listen on port 12345. We have also provided credentials to authenticate ourselves on the proxy server.
You can create multiple redsocks sections, but each has to specify a different local port, and as a result you need to set appropriate iptables rules for each.
redsocks {
    // Local IP to listen on
    local_ip = 127.0.0.1;
    // Port to listen on
    local_port = 23456;
    // Remote proxy address
    ip = puper.socks.proxy.com;
    port = 6699;
    // Proxy type
    type = socks5;
    // Username to authorize on the proxy server
    login = anonymous2;
    // Password for the proxy user
    password = verystrongpassword;
    // Do not disclose the real IP
    disclose_src = false;
}
As an example, you can use a tricky technique to simulate load balancing with the help of the statistic module.
iptables -t nat -A REDSOCKS -p tcp -m statistic --mode random --probability 0.5 -j REDIRECT --to-ports 23456
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
There are a lot of other ways to define iptables rules for use with multiple proxies; for instance, you can use the mangle table, create another chain (e.g. REDSOCKS_HTTP), and so on.
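Note that the probabilities chain: the second rule only sees packets the first one did not match, so a single 0.5 yields a 50/50 split. To spread traffic evenly over N proxies, rule i needs probability 1/(N-i+1). A quick numeric check of the resulting shares for three proxies (my own sketch):

```shell
# Effective share of rule i = p_i * (fraction of traffic left over by rules 1..i-1).
shares() {
    awk 'BEGIN {
        split("0.3333 0.5 1", p, " ");
        remaining = 1;
        for (i = 1; i <= 3; i++) {
            share = remaining * p[i];
            printf "rule %d gets %.2f of the traffic\n", i, share;
            remaining -= share;
        }
    }'
}

shares    # each rule ends up with roughly one third
```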
Now it is time to test our configuration. Save the configuration file, navigate to the folder with the compiled Redsocks executable, and run the following command.
./redsocks -c /etc/redsocks.conf
If your configuration file is valid and there are no other errors, the command should return almost immediately when running in daemon mode, or you should see output similar to the following with daemon mode off.
1499876514.801445 notice main.c:165 main(...) redsocks started, conn_max=128
Furthermore, you can use the netstat tool to get the list of processes bound to ports.
[root@centos ~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      533/mongod
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      50/sshd
tcp        0      0 127.0.0.1:12345         0.0.0.0:*               LISTEN      465/./redsocks
tcp6       0      0 :::22                   :::*                    LISTEN      50/sshd
udp        0      0 0.0.0.0:14332           0.0.0.0:*                           229/dhclient
udp        0      0 0.0.0.0:68              0.0.0.0:*                           229/dhclient
udp6       0      0 :::28185                :::*                                229/dhclient
Now try to make a request as the specified user, or one that matches a rule you defined on your own. For example, use the following curl request to get your external IP address.
curl 'https://api.ipify.org?format=json'
If the redsocks process is attached to the terminal, you should see something like this in stderr.
1499882494.465788 info redsocks.c:1243 redsocks_accept_client(...) [192.168.0.128:34470->50.19.238.1:443]: accepted
1499882495.249026 debug redsocks.c:341 redsocks_start_relay(...) [192.168.0.128:34470->50.19.238.1:443]: data relaying started
1499882496.359465 info redsocks.c:671 redsocks_drop_client(...) [192.168.0.128:34470->50.19.238.1:443]: connection closed
If you encounter any errors, check the debug output first.
In this tutorial I've described how to install Redsocks on CentOS 7. Redsocks provides a very convenient way to configure a proxy environment, from a single proxy server to dozens of proxies of different types. In addition, if you are missing some features in the original project, you can check out the Redsocks2 fork.
I hope this tutorial was useful for you. If you have any troubles with setting up Redsocks, please feel free to leave comments below.