Cloud, Data and Integrations - https://qappdesign.com/code - with Petru Faurescu

Using MongoDB .NET Driver with .NET Core WebAPI
https://qappdesign.com/code/using-mongodb-with-net-core-webapi/ - Mon, 23 Jul 2018

What it’s about

The problem / solution format makes it easier to understand how things are built, giving immediate feedback. Starting from this idea, this blog post presents, step by step, how to build a web application to store your ideas in an easy way, adding text notes either from desktop or mobile, with a few characteristics: run fast, save on the fly whatever you write, and be reasonably reliable and secure.

This article implements just the backend, the WebApi and the database access, in the simplest way.

Updates to the original article

  • Following Peter’s comment, I have simplified the documents returned; see the HttpGet requests
  • Following Luciano’s comment, I have extended the update function to update full MongoDB documents at once, not just some of the properties. There is a new section below describing this change
  • Trying to read from Angular 2 (find the article here), I ran into CORS problems. An error message was displayed: “No ‘Access-Control-Allow-Origin’ header is present on the requested resource”. I have added a new section describing the solution
  • I have updated the project to .NET Core 1.1, as well as to MongoDB .NET Driver 2.4
  • Added a basic level of exception management
  • Following Peter’s comment, I have converted the solution to Visual Studio 2017
  • Updated to .NET Core 2.0
  • Following Matthew’s comment, I have updated the INoteRepository interface so it is not coupled to MongoDB libraries
  • Added a compound MongoDB index
  • Following the comments from Kirk and Andrea, I have added the MongoDB BsonId to the structure, and added a section on model binding of JSON POSTs
  • Following comments from Manish and Zahn, I have extended the example with a nested class, and updated to MongoDB.Driver 2.7, which adds support for new features of the MongoDB 4.0 Server

The GitHub project is updated and includes all these changes. You could directly download the sources or clone the project locally.

Topics covered

  • Technology stack
  • Configuration model
  • Options model
  • Dependency injection
  • MongoDb – Installation and configuration using MongoDB C# Driver v.2
  • Make a full ASP.NET WebApi project, connected async to MongoDB
  • Allowing Cross Domain Calls (CORS)
  • Update entire MongoDB documents
  • Exception management
  • Model binding of HTTP Post command (newly added)
  • Nested classes in MongoDb

Technology stack

The ASP.NET Core Web API has the big advantage that it runs as an HTTP service and can be consumed by any client application, from desktop to mobile, and it can be installed on Windows, macOS or Linux.

MongoDB is a popular NoSQL database that makes a great backend for Web APIs, which lend themselves more to a document store model than to relational databases. This post shows how to build a .NET Core Web API connected asynchronously to MongoDB, with full support for HTTP GET, PUT, POST and DELETE.

To install

Here is what needs to be installed:

  • Visual Studio (with the .NET Core workload)
  • MongoDB
  • Robo 3T (optional, to inspect the database)

Creating the ASP.NET WebApi project

Launch Visual Studio and then go to File > New Project > .NET Core > ASP.NET Core Web Application.

Configuration

Multiple file formats are supported out of the box for configuration (JSON, XML or INI). By default, the WebApi project template comes with the JSON format enabled. Inside the settings file, order matters, and complex structures are supported. Here is an example with a two-level settings structure for the database connection.
Update the appsettings.json file:

{
  "MongoConnection": {
    "ConnectionString": "mongodb://admin:abc123!@localhost",
    "Database": "NotesDb"
  },

  "Logging": {
    "IncludeScopes": false,
    "Debug": {
      "LogLevel": {
        "Default": "Warning"
      }
    },
    "Console": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  }
}

Dependency injection and Options model

Constructor injection is one of the most common approaches to implementing Dependency Injection (DI), though not the only one. ASP.NET Core uses constructor injection, so we will use it too. An ASP.NET Core project has a Startup.cs file, which configures the environment in which the application runs. Startup.cs also registers services into ASP.NET Core’s services layer, which is what enables dependency injection.

To map the custom database connection settings, we will add a new Settings class.

namespace NotebookAppApi.Model
{
    public class Settings
    {
        public string ConnectionString { get; set; }
        public string Database { get; set; }
    }
}

Here is how we modify Startup.cs to inject Settings in the Options accessor model:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
    services.Configure<Settings>(options =>
    {
        options.ConnectionString 
			= Configuration.GetSection("MongoConnection:ConnectionString").Value;
        options.Database 
			= Configuration.GetSection("MongoConnection:Database").Value;
    });
}

Further in the project, settings will be accessed via the IOptions interface:

IOptions<Settings>

MongoDB configuration

Once you have installed MongoDB, you need to configure access, as well as where the data is located.

To do this, create a local file named mongod.cfg. It sets the path to the MongoDB data folder and to the MongoDB log file, initially without any authentication. Please update these local paths with your own settings:

systemLog:
  destination: file
  path: "C:\\tools\\mongodb\\db\\log\\mongo.log"
  logAppend: true
storage:
  dbPath: "C:\\tools\\mongodb\\db\\data"

Run the next line in a command prompt. It starts the MongoDB server, pointing to the configuration file created above (in case the server is installed in a custom folder, please update the command first):

"C:\Program Files\MongoDB\Server\3.2\bin\mongod.exe" --config C:\Dev\Data.Config\mongod.cfg

Once the server is started (you can see the details in the log file), run mongo.exe in a command prompt, using the full path (e.g. “C:\Program Files\MongoDB\Server\3.2\bin\mongo.exe”). The next step is to add an administrator user to the database.

Copy and paste the following code into the console:

use admin
db.createUser(
  {
	user: "admin",
	pwd: "abc123!",
	roles: [ { role: "root", db: "admin" } ]
  }
);
exit;

Then stop the server and update the configuration file, adding the security option.

systemLog:
  destination: file
  path: "C:\\tools\\mongodb\\db\\log\\mongo.log"
  logAppend: true
storage:
  dbPath: "C:\\tools\\mongodb\\db\\data"
security:
  authorization: enabled

From now on, we’ll connect to MongoDB using the admin user. It is good practice not to use the superuser role (in our case, administrator) for normal operations, but to keep things simple we will continue with just a single user.

MongoDB .NET Driver

To connect to MongoDB, add the NuGet package named MongoDB.Driver. This is the official driver for .NET, and it fully supports ASP.NET Core applications.

Model

The model class (POCO) associated with each entry in the notebook is included below:

using System;
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

namespace NotebookAppApi.Model
{
	public class Note
	{
		// standard BsonId generated by MongoDb
		[BsonId]
		public ObjectId InternalId { get; set; }

		// external Id, easier to reference: 1,2,3 or A, B, C etc.
		public string Id { get; set; }

		public string Body { get; set; } = string.Empty;

		// attribute to gain control over DateTime serialization
		[BsonDateTimeOptions]
		public DateTime UpdatedOn { get; set; } = DateTime.Now;

		public NoteImage HeaderImage { get; set; }

		public int UserId { get; set; } = 0;
	}
}

Note: By default, with the BsonDateTimeOptions attribute, the BSON serializer stores DateTime values as UTC. Adding the attribute as follows allows saving in local time instead: [BsonDateTimeOptions(Kind = DateTimeKind.Local)]

Assuming a Note has a header image, here is a sample embedded class:

public class NoteImage
{
	public string Url { get; set; } = string.Empty;
	public string ThumbnailUrl { get; set; } = string.Empty;
	public long ImageSize { get; set; } = 0L;
}

Defining the database context

In order to keep the database access functions in a distinct place, we will add a NoteContext class. It uses the Settings defined above.

public class NoteContext
{
    private readonly IMongoDatabase _database = null;

    public NoteContext(IOptions<Settings> settings)
    {
        var client = new MongoClient(settings.Value.ConnectionString);
        _database = client.GetDatabase(settings.Value.Database);
    }

    public IMongoCollection<Note> Notes
    {
        get
        {
            return _database.GetCollection<Note>("Note");
        }
    }
}

Adding the repository

Using a repository interface, we will implement the functions needed to manage the Notes. The repository will also be registered with Dependency Injection (DI), so it can be easily accessed from the application (e.g. from the controllers):

public interface INoteRepository
{
	Task<IEnumerable<Note>> GetAllNotes();
	Task<Note> GetNote(string id);

	// query by multiple parameters
	Task<IEnumerable<Note>> GetNote(string bodyText, DateTime updatedFrom, long headerSizeLimit);

	// add a new note document
	Task AddNote(Note item);

	// remove a single document / note
	Task<bool> RemoveNote(string id);

	// update just some properties of a single document / note
	Task<bool> UpdateNote(string id, string body);

	// full document update / replace
	Task<bool> UpdateNote(string id, Note item);

	// demo interface - full document update
	Task<bool> UpdateNoteDocument(string id, string body);

	// create a sample index (called by the demo initialization controller)
	Task<string> CreateIndex();

	// should be used with great caution, only in a demo setup
	Task<bool> RemoveAllNotes();
}

Access to the database is asynchronous. We are using the new driver, which offers a full async stack.

As an example, to get all the Notes we make an async request:

public async Task<IEnumerable<Note>> GetAllNotes()
{
    var documents = await _context.Notes.Find(_ => true).ToListAsync();
    return documents;
}

Here is the full implementation, for all basic CRUD operations:

public class NoteRepository : INoteRepository
{
	private readonly NoteContext _context = null;

	public NoteRepository(IOptions<Settings> settings)
	{
		_context = new NoteContext(settings);
	}

	public async Task<IEnumerable<Note>> GetAllNotes()
	{
		try
		{
			return await _context.Notes
					.Find(_ => true).ToListAsync();
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex);
			// 'throw;' preserves the original stack trace ('throw ex;' would reset it)
			throw;
		}
	}

	// query by Id or InternalId (BsonId value)
	//
	public async Task<Note> GetNote(string id)
	{
		try
		{
			ObjectId internalId = GetInternalId(id);
			return await _context.Notes
							.Find(note => note.Id == id 
									|| note.InternalId == internalId)
							.FirstOrDefaultAsync();
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex); 'throw;' preserves the stack trace
			throw;
		}
	}

	// query by body text, updated time, and header image size
	//
	public async Task<IEnumerable<Note>> GetNote(string bodyText, DateTime updatedFrom, long headerSizeLimit)
	{
		try
		{
			var query = _context.Notes.Find(note => note.Body.Contains(bodyText) &&
								   note.UpdatedOn >= updatedFrom &&
								   note.HeaderImage.ImageSize <= headerSizeLimit);

			return await query.ToListAsync();
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex); 'throw;' preserves the stack trace
			throw;
		}
	}

	private ObjectId GetInternalId(string id)
	{
		ObjectId internalId;
		if (!ObjectId.TryParse(id, out internalId))
			internalId = ObjectId.Empty;

		return internalId;
	}
	
	public async Task AddNote(Note item)
	{
		try
		{
			await _context.Notes.InsertOneAsync(item);
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex); 'throw;' preserves the stack trace
			throw;
		}
	}

	public async Task<bool> RemoveNote(string id)
	{
		try
		{
			DeleteResult actionResult 
				= await _context.Notes.DeleteOneAsync(
					Builders<Note>.Filter.Eq("Id", id));

			return actionResult.IsAcknowledged 
				&& actionResult.DeletedCount > 0;
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex); 'throw;' preserves the stack trace
			throw;
		}
	}

	public async Task<bool> UpdateNote(string id, string body)
	{
		var filter = Builders<Note>.Filter.Eq(s => s.Id, id);
		var update = Builders<Note>.Update
						.Set(s => s.Body, body)
						.CurrentDate(s => s.UpdatedOn);

		try
		{
			UpdateResult actionResult 
				= await _context.Notes.UpdateOneAsync(filter, update);

			return actionResult.IsAcknowledged
				&& actionResult.ModifiedCount > 0;
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex); 'throw;' preserves the stack trace
			throw;
		}
	}

	public async Task<bool> UpdateNote(string id, Note item)
	{
		try
		{
			ReplaceOneResult actionResult 
				= await _context.Notes
								.ReplaceOneAsync(n => n.Id.Equals(id)
										, item
										, new UpdateOptions { IsUpsert = true });
			// with IsUpsert, a newly created document reports ModifiedCount 0,
			// so also check UpsertedId
			return actionResult.IsAcknowledged
				&& (actionResult.ModifiedCount > 0
					|| actionResult.UpsertedId != null);
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex); 'throw;' preserves the stack trace
			throw;
		}
	}

	// Demo function - full document update
	public async Task<bool> UpdateNoteDocument(string id, string body)
	{
		var item = await GetNote(id) ?? new Note();
		item.Body = body;
		item.UpdatedOn = DateTime.Now;

		return await UpdateNote(id, item);
	}

	public async Task<bool> RemoveAllNotes()
	{
		try
		{
			DeleteResult actionResult 
				= await _context.Notes.DeleteManyAsync(new BsonDocument());

			return actionResult.IsAcknowledged
				&& actionResult.DeletedCount > 0;
		}
		catch (Exception ex)
		{
			// log or manage the exception (ex); 'throw;' preserves the stack trace
			throw;
		}
	}
}
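The demo initialization controller (shown later) calls CreateIndex() on the repository. Here is a possible implementation sketch; the choice of indexed fields (UserId plus Body) is an assumption for illustration, so adjust it to your actual query patterns:

```csharp
// Sketch: create a compound index over UserId and Body (illustrative field choice).
// CreateIndexModel is available starting with MongoDB.Driver 2.7.
public async Task<string> CreateIndex()
{
	var keys = Builders<Note>.IndexKeys
					.Ascending(item => item.UserId)
					.Ascending(item => item.Body);

	// returns the name of the created index
	return await _context.Notes.Indexes
					.CreateOneAsync(new CreateIndexModel<Note>(keys));
}
```

Creating the index at startup is idempotent: if an identical index already exists, MongoDB simply keeps it.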

In order to access NoteRepository via the DI model, we add a new line in ConfigureServices:

services.AddTransient<INoteRepository, NoteRepository>();

where:

  • Transient: Created each time.
  • Scoped: Created only once per request.
  • Singleton: Created the first time they are requested. Each subsequent request uses the instance that was created the first time.
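To make the three lifetimes concrete, here is how each registration would look (only one should be used for a given service; Transient is what this project uses):

```csharp
// created on every resolve - a safe default for lightweight, stateless services
services.AddTransient<INoteRepository, NoteRepository>();

// one instance per HTTP request
// services.AddScoped<INoteRepository, NoteRepository>();

// one instance for the whole application lifetime
// services.AddSingleton<INoteRepository, NoteRepository>();
```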

Adding the main controller

First we present the main controller. It provides all the CRUD operations available to external applications.
The Get actions have a NoCache directive, to ensure web clients always make requests to the server.

[Produces("application/json")]
[Route("api/[controller]")]
public class NotesController : Controller
{
	private readonly INoteRepository _noteRepository;

	public NotesController(INoteRepository noteRepository)
	{
		_noteRepository = noteRepository;
	}

	[NoCache]
	[HttpGet]
	public async Task<IEnumerable<Note>> Get()
	{
		return await _noteRepository.GetAllNotes();
	}

	// GET api/notes/5 - retrieves a specific note using either Id or InternalId (BSonId)
	[HttpGet("{id}")]
	public async Task<Note> Get(string id)
	{
		return await _noteRepository.GetNote(id) ?? new Note();
	}

	// GET api/notes/text/date/size
	// ex: http://localhost:53617/api/notes/Test/2018-01-01/10000
	[NoCache]
	[HttpGet(template: "{bodyText}/{updatedFrom}/{headerSizeLimit}")]
	public async Task<IEnumerable<Note>> Get(string bodyText, 
											 DateTime updatedFrom, 
											 long headerSizeLimit)
	{
		return await _noteRepository.GetNote(bodyText, updatedFrom, headerSizeLimit) 
					?? new List<Note>();
	}

	// POST api/notes - creates a new note
	[HttpPost]
	public async Task Post([FromBody] NoteParam newNote)
	{
		await _noteRepository.AddNote(new Note
									{
										Id = newNote.Id,
										Body = newNote.Body,
										UpdatedOn = DateTime.Now,
										UserId = newNote.UserId
									});
	}

	// PUT api/notes/5 - updates a specific note
	[HttpPut("{id}")]
	public async Task Put(string id, [FromBody]string value)
	{
		await _noteRepository.UpdateNoteDocument(id, value);
	}

	// DELETE api/notes/5 - deletes a specific note
	[HttpDelete("{id}")]
	public async Task Delete(string id)
	{
		await _noteRepository.RemoveNote(id);
	}
}

Adding the admin controller

This is a controller dedicated to administrative tasks (we use it to initialize the database with some dummy data). In real projects, such an interface should be used very cautiously; for development and quick testing, the approach is convenient.

To use it, just open the URL in the browser. Running the code below, the full setup is created automatically (new database, new collection, sample records). We can use either http://localhost:5000/api/system/init (when using IIS) or http://localhost:53617/api/system/init (when using IIS Express, the default for this sample project). We could extend the idea by adding more commands. However, as mentioned above, this kind of scenario should be used only for development, and never be deployed to a production environment.

[Route("api/[controller]")]
public class SystemController : Controller
{
	private readonly INoteRepository _noteRepository;

	public SystemController(INoteRepository noteRepository)
	{
		_noteRepository = noteRepository;
	}

	// Call an initialization - api/system/init
	[HttpGet("{setting}")]
	public async Task<string> Get(string setting)
	{
		if (setting == "init")
		{
			await _noteRepository.RemoveAllNotes();
			await _noteRepository.CreateIndex();

			await _noteRepository.AddNote(new Note
			{
				Id = "1",
				Body = "Test note 1",
				UpdatedOn = DateTime.Now,
				UserId = 1,
				HeaderImage = new NoteImage
				{
					ImageSize = 10,
					Url = "http://localhost/image1.png",
					ThumbnailUrl = "http://localhost/image1_small.png"
				}
			});

			await _noteRepository.AddNote(new Note
			{
				Id = "2",
				Body = "Test note 2",
				UpdatedOn = DateTime.Now,
				UserId = 1,
				HeaderImage = new NoteImage
				{
					ImageSize = 13,
					Url = "http://localhost/image2.png",
					ThumbnailUrl = "http://localhost/image2_small.png"
				}
			});

			await _noteRepository.AddNote(new Note
			{
				Id = "3",
				Body = "Test note 3",
				UpdatedOn = DateTime.Now,
				UserId = 1,
				HeaderImage = new NoteImage
				{
					ImageSize = 14,
					Url = "http://localhost/image3.png",
					ThumbnailUrl = "http://localhost/image3_small.png"
				}
			});

			await _noteRepository.AddNote(new Note
			{
				Id = "4",
				Body = "Test note 4",
				UpdatedOn = DateTime.Now,
				UserId = 1,
				HeaderImage = new NoteImage
				{
					ImageSize = 15,
					Url = "http://localhost/image4.png",
					ThumbnailUrl = "http://localhost/image4_small.png"
				}
			});

			return "Database NotesDb was created, and collection 'Note' was filled with 4 sample items";
		}

		return "Unknown";
	}
}

Launch settings

In order to have a quick display of the values once the project runs, please update the launchSettings.json file.


Here is the full file content, pointing by default to the api/notes URL.

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:53617/",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "api/notes",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "NotebookAppApi": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000/api/notes",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Running the project

Before running the project, please make sure MongoDB is running (either as a Windows service, or as a console application, as presented above).

First, run the initialization link:
http://localhost:53617/api/system/init

and then open the default application link:
http://localhost:53617/api/notes

Use Robo 3T

Using Robo 3T we can check the actual entries in the database. Connecting with the credentials above, we can see all the records.

Even though the unique id is named _id in the database, the MongoDB .NET Driver maps it to our InternalId property via the [BsonId] attribute.
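For reference, a stored Note would look roughly like this in Robo 3T (the ObjectId value and date are illustrative):

```javascript
{
    "_id" : ObjectId("507f1f77bcf86cd799439011"),
    "Id" : "1",
    "Body" : "Test note 1",
    "UpdatedOn" : ISODate("2018-07-23T08:30:47.000Z"),
    "HeaderImage" : {
        "Url" : "http://localhost/image1.png",
        "ThumbnailUrl" : "http://localhost/image1_small.png",
        "ImageSize" : NumberLong(10)
    },
    "UserId" : 1
}
```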

Running project on GitHub

Full source for this example is available on GitHub -> https://github.com/fpetru/WebApiMongoDB

Allowing Cross Domain Calls (CORS)

Being different applications running on separate domains, all calls back to the ASP.NET WebAPI site are effectively cross-domain calls. With Angular 2, a pre-flight request (an OPTIONS request) is sent before the actual request. With this pre-check, the browser first verifies that cross-domain calls are allowed (CORS).

I have enabled CORS by applying two changes.

First, register the CORS services in ConfigureServices() of Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // Add service and create Policy with options
    services.AddCors(options =>
    {
        options.AddPolicy("CorsPolicy",
            builder => builder.AllowAnyOrigin()
                              .AllowAnyMethod()
                              .AllowAnyHeader()
                              .AllowCredentials());
    });

    // ....

    services.AddMvc();
}

Then enable the policy globally, for every request in the application, by calling app.UseCors() in the Configure() method of Startup, before UseMvc():

public void Configure(IApplicationBuilder app)
{
    // ...

    // global policy, if assigned here (it could be defined individually for each controller)
    app.UseCors("CorsPolicy");

    // ...

    // UseCors() must come BEFORE UseMvc; below is just a partial call
    app.UseMvc(routes => {
}

Even if this could be applied more selectively, the rest of the article remains unchanged.

Fully update the MongoDB documents

Initially the sample project included only selective updates of some properties. Using ReplaceOneAsync we can update the full document at once. The IsUpsert option creates the document in case it does not already exist.

public async Task<ReplaceOneResult> UpdateNote(string id, Note item)
{
     return await _context.Notes
                          .ReplaceOneAsync(n => n.Id.Equals(id)
                                            , item
                                            , new UpdateOptions { IsUpsert = true });
} 

Test the update

To test the update, I have used Postman, an excellent tool for testing APIs.

I selected the PUT command type (matching the controller’s HttpPut update action), entered the local URL, and added a new header (Content-Type: application/json).


And then set the Body as raw and updated a dummy value.
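As an alternative to Postman, the same update can be sent from a command line with curl, assuming the IIS Express port 53617 from launchSettings.json, a running application, and an existing note with Id 1 (the controller binds the raw JSON string as the new body):

```shell
curl -X PUT "http://localhost:53617/api/notes/1" -H "Content-Type: application/json" -d "\"Updated body text\""
```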

Using Robo 3T we can see the updated value.

Exception management

Starting with C# 5.0, async and await were introduced into the language to simplify using the Task Parallel Library. We can simply use a try/catch block to catch an exception, like so:

public async Task<IEnumerable<Note>> GetAllNotes()
{
    try
    {
        return await _context.Notes.Find(_ => true).ToListAsync();
    }
    catch (Exception ex)
    {
        // log or manage the exception (ex);
        // 'throw;' preserves the original stack trace ('throw ex;' would reset it)
        throw;
    }
}

In this way we handle a faulted task by asynchronously waiting for it to complete, using await. Note that rethrowing with throw; (rather than throw ex;) preserves the original stack trace.

Initially I used void as the return type. By returning a Task instead, an exception raised in the async method is safely stored in the returned Task instance. When we await the faulty method, the exception saved in the Task is rethrown with its full stack trace preserved.

public async Task AddNote(Note item)
{
    try
    {
        await _context.Notes.InsertOneAsync(item);
    }
    catch (Exception ex)
    {
        // log or manage the exception (ex); 'throw;' preserves the stack trace
        throw;
    }
}

Model binding of JSON POSTs in .NET Core

Model binding is the conversion of the raw HTTP request into the arguments of an action method invocation on a controller.
The [FromBody] attribute tells the .NET Core framework to use the content-type header of the request to decide which of the configured IInputFormatters to use for model binding.

By default, when you call AddMvc() in Startup.cs, a JSON formatter (JsonInputFormatter) is automatically configured. You can add additional formatters if needed, for example to bind XML to an object.

[HttpPost]
public void Post([FromBody] NoteParam newNote)

To add a new Note, we first need to set Content-Type to application/json.

Then we send a JSON object, and we successfully add a new Note. Since UserId is not set, the property takes its default value.
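For example, a request body like the following (property names matching the NoteParam class, with UserId deliberately omitted so it falls back to the default) creates a new note:

```json
{
  "Id": "5",
  "Body": "A new note added over HTTP POST"
}
```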

Query on Embedded / Nested Documents

The MongoDB C# driver makes querying embedded documents easy. In the example below we combine two filters: one comparing a date on the main document, and one comparing a long member of the nested class.

note.UpdatedOn >= updatedFrom && note.HeaderImage.ImageSize <= headerSizeLimit

Accessing the application through IIS Express, we can use the Get overload that returns all the notes containing “Test”, updated after 2018-01-01, with a header image size of at most 10000. Once the project is started, this function can be called with the following URL in the browser: http://localhost:53617/api/notes/Test/2018-01-01/10000.
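For reference, the filter expression above is translated by the driver into roughly the following server-side query (shown in mongo shell syntax; the exact form the driver generates may differ):

```javascript
db.Note.find({
    "Body": /Test/,
    "UpdatedOn": { "$gte": ISODate("2018-01-01T00:00:00Z") },
    "HeaderImage.ImageSize": { "$lte": NumberLong(10000) }
})
```

The dotted path "HeaderImage.ImageSize" is how MongoDB addresses a field inside an embedded document.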

At the end

Hope this helped! Let me know if you have questions, or if something needs to be updated.

MongoDb and LINQ: How to aggregate and join collections
https://qappdesign.com/code/mongodb-and-linq-how-to-aggregate-and-join-collections/ - Tue, 14 Nov 2017

Data aggregations are very helpful whenever you need to create metrics or get more insights from the data. Furthermore, joining multiple MongoDB collections can provide more meaningful results. This article is a light introduction to running these operations on MongoDB using the .NET Driver and LINQ.

Notes before starting

This is the 3rd article in the series, continuing Part 1: How to search good places to travel (MongoDb LINQ & .NET Core) and Part 2: Paging in MongoDB – How to actually avoid poor performance?. All share the same GitHub project, each having its own specific code methods. Please follow the steps presented in Part 1 on how to install and configure MongoDB, as well as the section about the initial data upload.

To install

Here is what needs to be installed:

  • Visual Studio Community 2017, including the .NET Core option
  • MongoDB and Robo 3T

Run project

In brief, once the MongoDB installation is complete, run the next steps:

  • Clone or download the project (https://github.com/fpetru/WebApiQueryMongoDb)
  • Run import.cmd from the Data\Import folder
  • Open the solution in Visual Studio, compile and run

GroupBy in MongoDb

MongoDB has long had an aggregation framework, and with the .NET Driver its features map nicely onto standard LINQ operators (such as $project => Select(), $limit => Take(), $match => Where(), etc.). LINQ is ideally suited to building up a pipeline of operations and submitting it to the server as a single command.

In our example, grouping by City and counting the available travel items looks like this:

public async Task<IEnumerable<object>> GetTravelDestinations(string cityName)
{
    var groupTravelItemsByCity = _context.TravelItems.AsQueryable()
                .Where(city => string.IsNullOrEmpty(cityName) 
						|| city.City.Contains(cityName))
                .GroupBy(s => new { s.City })
                .Select(n => new
                {
                    value = n.Key.City,
                    data = n.Count()
                });

    return await groupTravelItemsByCity.Take(100).ToListAsync();
}
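The LINQ pipeline above is sent to the server as a single aggregation command, roughly equivalent to the following (mongo shell syntax, simplified; the driver may emit a slightly different pipeline):

```javascript
db.TravelItems.aggregate([
    // $match <= Where(): filter by (partial) city name
    { "$match": { "City": /CityName/ } },
    // $group: one bucket per City, counting the items
    { "$group": { "_id": { "City": "$City" }, "data": { "$sum": 1 } } },
    // $project <= Select(): shape the output documents
    { "$project": { "value": "$_id.City", "data": 1, "_id": 0 } },
    // $limit <= Take(100)
    { "$limit": 100 }
])
```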

The results are made available to external applications through the Get action on the controller:

// GET api/Display/GroupBy?city=CityName
[NoCache]
[HttpGet("{type}")]
public async Task<IActionResult> Get(string type, [FromQuery]string city)
{
	if (!string.IsNullOrEmpty(city) && city.Length > 1) 
		return Ok(await _displayRepository.GetTravelDestinations(city));

	return NotFound();
}

I have used the IActionResult interface to be able to return 404 in case the request does not meet the requirements: a city needs to be provided, with a minimum length of 2 characters.

More about aggregation in MongoDb

All the standard LINQ to SQL aggregate operators are supported: Average, Count, Max, Min, and Sum. We can also group by multiple attributes. Here is an example grouping first by City and then by each associated Action, also using the aggregate functions (Count, Max and Min):

public async Task<IEnumerable<object>> GetTravelItemStat()
{
	var groupTravelItemsByCityAndAction = _context.TravelItems.AsQueryable()
				.Where(s => s.City == "Paris" || s.City == "Berlin")
				.GroupBy(s => new { s.City, s.Action })
				.Select(n => new
				{
					Location = n.Key,
					Count = n.Count(),
					MaxPrice = n.Max(p => p.Price),
					MinPrice = n.Min(p => p.Price)
				});

	return await groupTravelItemsByCityAndAction.Take(100).ToListAsync();
}
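The same multi-key grouping with aggregates can be sketched in plain JavaScript over in-memory data (hypothetical helper, not the driver API):

```javascript
// Per (City, Action) key, compute Count, MaxPrice and MinPrice,
// mirroring the LINQ projection above.
function groupStats(items) {
  const groups = new Map();
  for (const it of items) {
    const key = it.City + '|' + it.Action;
    const g = groups.get(key) || {
      Location: { City: it.City, Action: it.Action },
      Count: 0, MaxPrice: -Infinity, MinPrice: Infinity
    };
    g.Count += 1;
    g.MaxPrice = Math.max(g.MaxPrice, it.Price);
    g.MinPrice = Math.min(g.MinPrice, it.Price);
    groups.set(key, g);
  }
  return [...groups.values()];
}
```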

Join support from MongoDb

Here is an example of a join between two collections, written as a LINQ query expression. It is a LEFT join, starting from the first (left-most) collection (TravelItems) and matching documents from the second (right-most) collection (CityExtended).

The matching items are collected into a group (CityExtendedMatchingItems). The overall result can be projected into an anonymous type (as in the example below) or into a new entity:

public async Task<IEnumerable<object>> GetTravelItemsOfCityAsync(string cityName)
{
	var query = from travelItem in _context.TravelItems.AsQueryable()
				join city in _context.CityExtended.AsQueryable()
				   on travelItem.City equals city.Name
				into CityExtendedMatchingItems
				where (travelItem.City == cityName)
				select new
				{
					Action = travelItem.Action,
					Name = travelItem.Name,
					FirstCityMatched = CityExtendedMatchingItems.First(),
				};

	return await query.Take(10).ToListAsync();
}
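The join-and-project semantics can be sketched in plain JavaScript over in-memory data (hypothetical helper name; note that LINQ's First() would throw on an empty group, while this sketch yields undefined):

```javascript
// For each travel item in the requested city, collect the matching
// CityExtended documents and project the first match.
function joinTravelItemsWithCity(travelItems, cityExtended, cityName) {
  return travelItems
    .filter(t => t.City === cityName)
    .map(t => {
      const matches = cityExtended.filter(c => c.Name === t.City);
      return { Action: t.Action, Name: t.Name, FirstCityMatched: matches[0] };
    });
}
```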

Access the WebApi using Javascript

Accessing the WebApi from a simple static HTML page with JavaScript could look like this:

To make an HTML file available within the project, we first need to enable access to static files (e.g. html, css, images). These are typically located in the web root (/wwwroot) folder. For development, we can set the project folder as the content root – see the UseContentRoot method:

public static IWebHost BuildWebHost(string[] args) =>
	WebHost.CreateDefaultBuilder(args)
		.UseContentRoot(Directory.GetCurrentDirectory())
		.UseStartup<Startup>()
		.Build();

In order for static files to be served, we also need to configure the middleware pipeline to serve them.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // ...
    app.UseStaticFiles();
    // ... 
}

Once static files are enabled, we will use the jQuery library to access the WebApi and to display the results (with the autocomplete widget). This code was originally adapted from: https://designshack.net/articles/javascript/create-a-simple-autocomplete-with-html5-jquery

Here is the full JavaScript code for both the autocomplete function and the WebApi call.

<script type="text/javascript">
    $(document).ready(function () {
        $('#autocomplete').autocomplete({
            minLength: 2,
            source: function (request, response) {
                var webApiUrl = './api/display/GroupBy' + '?city=' + request.term;
                $.getJSON(webApiUrl, request, function (data, status, xhr) {
                    response(data);
                });
            },
        });
    });
</script>


The post MongoDb and LINQ: How to aggregate and join collections appeared first on Cloud, Data and Integrations.

]]>
Paging in MongoDB – How to actually avoid poor performance ? https://qappdesign.com/code/paging-mongodb-avoid-poor-performance/ https://qappdesign.com/code/paging-mongodb-avoid-poor-performance/#comments Wed, 16 Aug 2017 20:16:34 +0000 https://qappdesign.com/code/?p=646

The post Paging in MongoDB – How to actually avoid poor performance ? appeared first on Cloud, Data and Integrations.

]]>
What is the best way (performance-wise) to paginate results in MongoDB? Especially when you also want to get the total number of results?
The project runs on .NET Core 2.0.

Where to start?

To answer these questions, let's start from the datasets defined in my earlier article Part 1: How to search good places to travel (MongoDb LINQ & .NET Core). That article was a quick introduction on how to load big chunks of data and then retrieve values using WebApi and LINQ. Here, I will start from that project and extend it with more details related to paging the query results. You could also check Part 3 – MongoDb and LINQ: How to aggregate and join collections

You can find the full solution, together with the data here: https://github.com/fpetru/WebApiQueryMongoDb

Topics covered

  • Paging query results with skip and limit
  • Paging query results using last position
  • MongoDb BSonId
  • Paging using MongoDb .NET Driver

To install

Here are all the things that need to be installed:

  • Visual Studio Community 2017, including the .NET Core option
  • MongoDB and Robomongo

See the results

Here are a few steps to have the solution ready and see the results immediately:

  1. Clone or download the project
  2. Run the import.bat file from the Data folder – this will create the database (TravelDb) and fill in two datasets
  3. Open the solution with Visual Studio 2017 and check the connection settings in appsettings.json
  4. Run the solution

If you have any issues installing MongoDB, setting up the databases, or with the project structure, please review my earlier article.

Paging results using cursor.skip() and cursor.limit()

If you do a Google search, this is usually the first method presented for paginating query results in MongoDB. It is straightforward, but also expensive in terms of performance: it requires the server to walk from the beginning of the collection or index each time, to reach the offset (skip) position, before it actually begins to return the results you need.

For example:

db.Cities.find().skip(5200).limit(10);

The server will need to scan the first 5200 items in the Cities collection and then return the next 10. This doesn't scale well, because of the skip() command.
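An in-memory sketch in plain JavaScript shows the shape of offset paging; on the server, unlike slice() here, reaching the offset still costs a scan of all the skipped documents:

```javascript
// Offset paging: return `limit` items starting at position `skip`.
function pageBySkip(items, skip, limit) {
  return items.slice(skip, skip + limit);
}
```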

Paging using the last position

To be faster, we should search and retrieve the details starting from the last retrieved item. As an example, let's assume we need to find all the cities in France with a population greater than 15,000 inhabitants.

Following this method, the initial request, retrieving the first 200 records, would be:

LINQ Format
We first obtain the AsQueryable interface:

var _client = new MongoClient(settings.Value.ConnectionString);
var _database = _client.GetDatabase(settings.Value.Database);
var _context = _database.GetCollection<City>("Cities").AsQueryable();

and then we run the actual query:

query = _context.CitiesLinq
                .Where(x => x.CountryCode == "FR"
                            && x.Population >= 15000)
                .OrderByDescending(x => x.Id)
                .Take(200);
				
List<City> cityList = await query.ToListAsync();

The subsequent queries start from the last retrieved Id. Ordering by the BsonId in descending order, we retrieve the records created on the server before the last Id.

query = _context.CitiesLinq
                .Where(x => x.CountryCode == "FR"
                         && x.Population >= 15000
                         && x.Id < ObjectId.Parse("58fc8ae631a8a6f8d000f9c3"))
                .OrderByDescending(x => x.Id)
                .Take(200);
List<City> cityList = await query.ToListAsync();
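The logic can be sketched in plain JavaScript over an in-memory array (hypothetical helper; it assumes ids compare the way ObjectIds do):

```javascript
// "Paging from the last position": take the page-size newest items
// that come strictly before the last id already returned.
function pageByLastId(items, lastId, pageSize) {
  return items
    .filter(x => lastId == null || x.id < lastId) // first page when lastId is null
    .sort((a, b) => (a.id < b.id ? 1 : a.id > b.id ? -1 : 0)) // descending by id
    .slice(0, pageSize);
}
```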

Mongo’s ID

In MongoDB, each document stored in a collection requires a unique _id field that acts as a primary key. It is immutable and may be of any type other than an array. By default it is a MongoDB ObjectId; alternatives are a natural unique identifier, if available, or just an auto-incrementing number.

Using the default ObjectId type,

[BsonId]
public ObjectId Id { get; set; }

brings additional advantages, such as having available the date and time when the record was added to the database. Furthermore, sorting by ObjectId returns the entities last added to the MongoDB collection.

cityList.Select(x => new
					{
						BSonId = x.Id.ToString(), // unique hexadecimal number
						Timestamp = x.Id.Timestamp,
						ServerUpdatedOn = x.Id.CreationTime
						/* include other members */
					});
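The CreationTime property works because the first four bytes of an ObjectId encode its creation time in seconds since the Unix epoch. A plain JavaScript sketch of the decoding:

```javascript
// Decode the embedded timestamp from an ObjectId hex string:
// the first 8 hex characters are seconds since the Unix epoch.
function objectIdToDate(hexId) {
  return new Date(parseInt(hexId.substring(0, 8), 16) * 1000);
}
```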

Returning fewer elements

While the City class has 20 members, it makes sense to return just the properties we actually need. This reduces the amount of data transferred from the server.

cityList.Select(x => new
					{
						BSonId = x.Id.ToString(), // unique hexadecimal number
						x.Name,
						x.AlternateNames,
						x.Latitude,
						x.Longitude,
						x.Timezone,
						ServerUpdatedOn = x.Id.CreationTime
					});

Indexes in MongoDB – a few details

We would rarely need to get data in the exact order of the MongoDB internal ids (_id), without any filters (just using find()). In most cases, we retrieve data using filters and then sort the results. For queries that include a sort operation without an index, the server must load all the documents into memory to perform the sort before returning any results.

How do we add an index?
Using RoboMongo, we create the index directly on the server:

db.Cities.createIndex( { CountryCode: 1, Population: 1 } );

How do we check whether our query is actually using the index?
Running the query with the explain command returns details on index usage:

db.Cities.find({ CountryCode: "FR", Population : { $gt: 15000 }}).explain();

Is there a way to see the actual query behind the MongoDB LINQ statement?
The only way I could find was via the GetExecutionModel() method. It provides detailed information, but the inner elements are not easily accessible.

query.GetExecutionModel();

Using the debugger, we can see these elements, as well as the full query sent to MongoDB.

Then we can take the query, execute it against MongoDB using the RoboMongo tool, and see the details of the execution plan.

Non-LINQ way – Using the MongoDb .NET Driver

LINQ is slightly slower than using the direct API, as it adds a layer of abstraction to the query. This abstraction would allow you to swap MongoDB for another data source (MS SQL Server / Oracle / MySQL etc.) without many code changes, but it comes with a slight performance hit.

Even so, newer versions of the MongoDB .NET Driver have simplified a lot the way we filter and run queries. The fluent interface (IFindFluent) closely resembles the LINQ way of writing code.

var filterBuilder = Builders<City>.Filter;
var filter = filterBuilder.Eq(x => x.CountryCode, "FR")
				& filterBuilder.Gte(x => x.Population, 10000)
				& filterBuilder.Lt(x => x.Id, ObjectId.Parse("58fc8ae631a8a6f8d000f9c3"));

return await _context.Cities.Find(filter)
							.SortByDescending(p => p.Id)
							.Limit(200)
							.ToListAsync();

where _context is defined as

var _context = _database.GetCollection<City>("Cities");	

Implementation

Wrapping up, here is my proposal for the pagination function. OR predicates are supported by MongoDB, but it is usually hard for the query optimizer to predict the disjoint sets on the two sides of the OR. Avoiding them whenever possible is a well-known query-optimization trick.

// building where clause
//
private Expression<Func<City, bool>> GetConditions(string countryCode, 
												   string lastBsonId, 
												   int minPopulation = 0)
{
    Expression<Func<City, bool>> conditions 
						= (x => x.CountryCode == countryCode
                               && x.Population >= minPopulation);

    ObjectId id;
    if (!string.IsNullOrEmpty(lastBsonId) && ObjectId.TryParse(lastBsonId, out id))
    {
        conditions = (x => x.CountryCode == countryCode
                        && x.Population >= minPopulation
                        && x.Id < id);
    }

    return conditions;

}

public async Task<object> GetCitiesLinq(string countryCode, 
										string lastBsonId, 
										int minPopulation = 0)
{

    try
    {
        var items = await _context.CitiesLinq
                            .Where(GetConditions(countryCode, lastBsonId, minPopulation))
                            .OrderByDescending(x => x.Id)
                            .Take(200)
                            .ToListAsync();

        // select just few elements
        var returnItems = items.Select(x => new
                            {
                                BsonId = x.Id.ToString(),
                                Timestamp = x.Id.Timestamp,
                                ServerUpdatedOn = x.Id.CreationTime,
                                x.Name,
                                x.CountryCode,
                                x.Population
                            });

        int countItems = await _context.CitiesLinq
                            .Where(GetConditions(countryCode, "", minPopulation))
                            .CountAsync();


        return new
            {
                count = countItems,
                items = returnItems
            };
    }
    catch (Exception ex)
    {
        // log or manage the exception, then rethrow
        // (use "throw;" to preserve the original stack trace)
        throw;
    }
}

and in the controller

[NoCache]
[HttpGet]
public async Task<object> Get(string countryCode, int? population, string lastId)
{
	return await _travelItemRepository
					.GetCitiesLinq(countryCode, lastId, population ?? 0);
}

The initial request (sample):

http://localhost:61612/api/city?countryCode=FR&population=10000

followed by other requests where we specify the last retrieved Id:

http://localhost:61612/api/city?countryCode=FR&population=10000&lastId=58fc8ae631a8a6f8d00101f9

Here is just a sample:

At the end

I hope this helps, and please let me know if you need it to be extended or have questions.


]]>
How to search good places to travel (MongoDb LINQ & .NET Core) https://qappdesign.com/code/search-best-places-mongodb-linq-netcore/ https://qappdesign.com/code/search-best-places-mongodb-linq-netcore/#comments Mon, 24 Apr 2017 08:00:49 +0000 https://qappdesign.com/code/?p=579

The post How to search good places to travel (MongoDb LINQ & .NET Core) appeared first on Cloud, Data and Integrations.

]]>
Let’s build a simple WebApi with .NET Core and MongoDb to query the details of different destinations around the globe. We’ll do the search with MongoDb LINQ, running different scenarios.

For a brief introduction on how to build and test a full .NET CORE WebApi with MongoDB please check my earlier article: Using MongoDB .NET Driver with .NET Core WebAPI.

This article continues with two other parts:

  • Part 2 – Paging in MongoDB – How to actually avoid poor performance ?
  • Part 3 – MongoDb and LINQ: How to aggregate and join collections

You could find the project on GitHub: github.com/fpetru/WebApiQueryMongoDb

Within this article I will use two datasets:

  • Wikivoyage provides details of the most traveler-friendly museums, attractions, restaurants and hotels around the globe. The original dataset can be accessed from the next url.
  • The second dataset comes from GeoNames, which is a geographical database covering all countries. For demo purposes, I have selected only the cities with a population over 5000 inhabitants.

Using these datasets makes it easier to run some sample queries, retrieving a consistent amount of data.

Topics covered

  • MongoDb – Installation and security setup
  • MongoDB – use mongoimport tool
  • Make a full ASP.NET WebApi project, connected async using MongoDB C# Driver v.2
  • Run LINQ queries

To install

Here are all the things that need to be installed:

MongoDB configuration

Once you have installed MongoDB, you need to configure the access, as well as where the data is located.

To do this, create a file locally named mongod.cfg. It sets the path to the data folder for the MongoDB server, as well as to the MongoDB log file, initially without any authentication (the last two lines being commented out). Please update these paths with your local settings:

systemLog:
  destination: file
  path: "C:\\tools\\mongodb\\db\\log\\mongo.log"
  logAppend: true
storage:
  dbPath: "C:\\tools\\mongodb\\db\\data"

#Once the admin user is created, remove the comments, and let the authorization be enabled
#security:
#  authorization: enabled

Run the next line in a command prompt. This will start the MongoDB server, pointing to the configuration file already created (in case the server is installed in a custom folder, please update the command first):

"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --config C:\Dev\Data.Config\mongod.cfg

Once the server is started (you can see the details in the log file), the next step is to add the administrator user to the database. Run mongo.exe with the full path (e.g. “C:\Program Files\MongoDB\Server\3.4\bin\mongo.exe”) in a command prompt and copy-paste the next code into the console:

use admin
db.createUser(
  {
	user: "admin",
	pwd: "abc123!",
	roles: [ { role: "root", db: "admin" } ]
  }
);
exit;

Stop the server, uncomment the last two lines of the mongod.cfg file, and then restart the MongoDB server.

MongoImport – Initialize the database with large datasets

We will start with Wikivoyage. The dataset was originally available here (link). To make it easier to import, I have slightly transformed it (changed it to a tab-delimited file and applied minimal data cleaning). The file is available on GitHub (link).

The second dataset GeoNames is available in the same Github folder (link).

Running the import.bat script (found in the same folder as the datasets) will import the data, also creating a new database called TravelDb and the associated indexes. The script is included here, but it is better to just run the script file:

mongoimport --db TravelDb ^
            --collection WikiVoyage ^
            --type tsv ^
            --fieldFile enwikivoyage-fields.txt^
            --file enwikivoyage-20150901-listings.result.tsv^
            --columnsHaveTypes^
            --username admin ^
            --password abc123! ^
            --authenticationDatabase admin ^
            --numInsertionWorkers 4

mongoimport --db TravelDb ^
            --collection Cities ^
            --type tsv ^
            --fieldFile cities5000-fields.txt^
            --file cities5000.txt ^
            --columnsHaveTypes^
            --username admin ^
            --password abc123! ^
            --authenticationDatabase admin ^
            --numInsertionWorkers 4

The field files specify the field names as well as their associated types. Using the columnsHaveTypes option, we import the data with the types we need (e.g. int, double, string etc.).
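As an illustration of the format (hypothetical field names; the actual field files are in the GitHub repository), a fields file for --columnsHaveTypes lists one field per line, with its type appended:

```text
Name.string()
CountryCode.string()
Population.int32()
Latitude.double()
Longitude.double()
```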

The result should look like this:

MongoDB – LINQ support

The .NET Core solution included here follows the same structure as in my earlier article – Using MongoDB .NET Driver with .NET Core WebAPI. There, I have already presented a step-by-step guide on how to create a WebApi solution from scratch, connecting to MongoDB and implementing all the basic actions of a REST API.

In comparison, here the web controller implements just one action (GET), focusing mainly on running different queries:

[NoCache]
[HttpGet]
public Task<IEnumerable<TravelItem>> Get()
{
    return GetTravelItemsInternal();
}

private async Task<IEnumerable<TravelItem>> GetTravelItemsInternal()
{
    return await _travelItemRepository.GetTravelItems();
}

In the background, the query runs using LINQ syntax, and it returns the first 500 records.

public async Task<IEnumerable<TravelItem>> GetTravelItems()
{
    try
    {
        return await _context.TravelItems.Take(500).ToListAsync();
    }
    catch (Exception ex)
    {
        // log or manage the exception, then rethrow
        // (use "throw;" to preserve the original stack trace)
        throw;
    }
}

The query is executed on the server, and we receive just the limited set of data. This is possible because the MongoDB C# Driver natively provides an IQueryable-type interface.

...
using MongoDB.Driver.Linq;
...
public IMongoQueryable<TravelItem> TravelItems
{
    get
    {
        return _database.GetCollection<TravelItem>("WikiVoyage").AsQueryable();
    }
}

How to find things to do in a specific city

Let’s assume we want to find the interesting things to do in a city. We either show all the items in the city, ordered by the type of action, or just select a specific action (e.g. buy, do, eat, drink etc.).

public async Task<IEnumerable<TravelItem>> GetTravelItems(string cityName, string action)
{
    try
    {
        if (action != null)
            return await _context.TravelItems
                .Where(p => p.City == cityName && p.Action == action).ToListAsync();

        return await _context.TravelItems.Where(p => p.City == cityName)
                    .OrderBy(p => p.Action)
                    .ToListAsync();
    }
    catch (Exception ex)
    {
        // log or manage the exception, then rethrow
        // (use "throw;" to preserve the original stack trace)
        throw;
    }
}

This method is called by a GET function. Assuming that we want to search for interesting things to do in Paris (http://localhost:61612/api/travelquery/Paris?doAction=do), we get interesting results; here is one of them:

Running the queries faster

One way to improve the speed of the queries is to add an index. Since we search the collection by City and Action, it is recommended to add a simple compound index on these two fields.

Executing the following JavaScript with the mongo shell adds an index on City, then Action.

db = db.getSiblingDB('TravelDb');
db.WikiVoyage.createIndex( { City: 1, Action: 1 } );

The retrieval time will drop from an average of 0.150 ms to about 0.001 ms.

Group items

What if we would like to see only headlines? What types of actions are available for a specific city, without getting into the details?

A sample query, grouping by the City and Action fields, would be:

await _context.TravelItems
            .GroupBy(grp => new { grp.City, grp.Action })
            .Select(g => new { g.Key.City, g.Key.Action }).ToListAsync();
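The same "headlines" result can be sketched in plain JavaScript as the distinct (City, Action) pairs (hypothetical in-memory helper, not the driver API):

```javascript
// Return the distinct (City, Action) pairs, without any item details,
// preserving first-seen order.
function distinctCityActions(items) {
  const seen = new Set();
  const result = [];
  for (const it of items) {
    const key = it.City + '|' + it.Action;
    if (!seen.has(key)) {
      seen.add(key);
      result.push({ City: it.City, Action: it.Action });
    }
  }
  return result;
}
```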

To continue

I will create a second part of this article, adding pagination support as well as the aggregation enhancements brought by newer MongoDB versions, also taking into consideration the second dataset. Perhaps you knew these things already, maybe you learned a few. Would you like to see something more covered?


]]>
The C4 software architecture model https://qappdesign.com/code/the-c4-software-architecture-model/ https://qappdesign.com/code/the-c4-software-architecture-model/#comments Mon, 23 Jan 2017 21:32:36 +0000 https://qappdesign.com/code/?p=348

The post The C4 software architecture model appeared first on Cloud, Data and Integrations.

]]>
Is there an easy way to succinctly and unambiguously communicate the architecture of a software system? Something that could highlight the requirements, and still be brief?

The Agile Manifesto prescribes that teams should value working software over comprehensive documentation. This doesn’t mean that we should not create documentation; it just means we should create documentation that provides value and, at the same time, does not hinder the team’s progress. We can achieve this using the C4 architecture model. It is a static model that provides an easy way to communicate the design of the system to everyone involved, and it also brings a natural narrative for exploring the architecture of a software solution. Starting from the highest level (what the system is and how it provides value to the business), it drills down into the details, to the very low level of functionality.

It could be something like the next car presentation, showing the relevant details from the outside in:
car layers
Source: Wired article (link)

This architecture model has been created by Simon Brown, and you can find more details, and live presentations on his website simonbrown.je.

Why such architecture model ?

The C4 model is a hierarchical way to think about the structures of a software system. Why would such a model be needed, given the existence of UML, the 4+1 architecture views (Wikipedia link) and others? I see the following advantages:

  • It makes the diagrams easy to read – usually the diagrams that describe a software system are embedded in the context of larger documents, and it is hard to get their full meaning without reading the full specification. The C4 model encourages writing succinct descriptive text within the diagram, making it easy to comprehend and use even outside the documentation. This gives other members of the team a better chance to use it.
  • It acts as a zoom in / zoom out, providing different amounts of detail, suited to the different persons / roles involved in the project. It starts from a context (general) diagram and goes into the details of containers (one or more containers such as web applications, mobile apps, standalone applications, databases, file systems etc.). Each container has one or more components, which in turn are implemented by one or more classes.
  • It reduces the gap between design and actual implementation – diagrams can be made in any tool. Even so, generating them from a few lines of code makes it possible to maintain them more easily as the software product is developed. Here is the tool – structurizr.com
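As a sketch of the "diagrams as code" idea, here is an approximate example in the Structurizr DSL (the names are invented for this illustration, and the exact syntax may vary between tool versions):

```text
workspace "Sandwich Shop" "Fax-in-your-order, over the Internet" {
    model {
        customer = person "Customer" "Places sandwich orders online"
        ordering = softwareSystem "Ordering System" "Takes orders, schedules pickup"
        maps     = softwareSystem "Google Maps" "Provides directions to the shop"

        customer -> ordering "Places an order"
        ordering -> maps "Requests directions"
    }
    views {
        systemContext ordering {
            include *
            autoLayout
        }
    }
}
```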

Architectural kata

To try this model, I propose to start from a simple specification, initially designed for architectural kata sessions. In this article I focus more on the way we graphically represent the system(s), rather than discussing how effectively the system is designed. In some places, other technology selections would make more sense.

A national sandwich shop wants to enable “fax in your order” but over the Internet instead.

Users: millions+

Requirements: users will place their order, then be given a time to pick up their sandwich and directions to the shop (which must integrate with Google Maps); if the shop offers a delivery service, dispatch the driver with the sandwich to the user; mobile-device accessibility; offer national daily promotionals/specials; offer local daily promotionals/specials; accept payment online or in person/on delivery.

Source: Architectural katas – Neal Ford

C4 Model – Diagrams

      1. Context diagram
      2. Container diagram
      3. Component diagram(s)
      4. Class diagram(s)

Context diagram

“Draw a simple block diagram showing your software system as a box in the centre,
surrounded by its users and the other software systems that it interacts with. Detail isn’t
important here as this is your zoomed-out view showing a big picture of the system
landscape. The focus should be on people (actors, roles, personas, etc.) and software systems
rather than technologies, protocols and other low-level details. It’s the sort of diagram that
you could show to non-technical people.”
From “Software Architecture for Developers – Volume 2 – Simon Brown”

This diagram provides the big picture, and it answers the following questions:

      1. What is the software system that we are building?
      It presents the 3 systems to build (external – for the public users; internal – for order processing; and an optional one for delivery management).
      2. Who is using it?
      It presents 8 types of users, with a short description of their role (e.g. mobile user etc.), as well as their actions (e.g. place an order etc.).
      3. How does it fit in with the existing environment?
      Other systems to interact with are included here. They can be internal to the organization (e.g. ERP, CRM etc.), or external (e.g. the Google Maps service).

This diagram also reflects the chosen approach, via the dependent systems – for example, a serverless architecture would be represented differently.

Note: The diagram is just a sample, and it does not reflect a real product. Different approaches could be considered. What it is more important here, is to see the diagrams with a lot of information included, not requiring extensive documents to explain them.


C4 Model – Context diagram

Second part, Container diagram

This is a simple, high-level, technology-focused diagram. It shows the high-level shape of the software architecture and how responsibilities are distributed across it. The diagram answers the following questions:

      1. What is the overall shape of the software system?
      2. What are the high-level technology decisions?
      3. How are responsibilities distributed across the system?
      4. How do containers communicate with one another?
      5. As a developer, where do I need to write code in order to implement features?

Note: The diagram is just a sample. Different technologies and approaches could be used, and maybe the technology stack used here is not what you would prefer. Still, having the full system (or subsystem, in case it is big) displayed on a single page (A3 / A4) can bring clarity and make different decisions easier to take.


C4 Model – Container diagram

Even for very big systems, we could draw similar diagrams. Here is an example reflecting the StackOverflow architecture (from Nick Craver – 2016 Architecture), which could be converted to a Container diagram. StackOverflow is a system with 1.3 billion page views per month.


C4 Model – Container diagram – Stack exchange

Third part, Component diagram(s)

The next step would be to zoom in and decompose the containers further, to show the components inside. This set of diagrams should answer the following questions:

      1. What components/services is the system made up of?
      2. Is it clear how the system works at a high level?
      3. Do all components/services have a home (i.e. reside in a container)?

At this level, it is not important to create all of these diagrams from the beginning. They can be written as individual components are developed, and updated whenever the components change, so the diagrams always reflect the current state.

Note: Maybe the components you would choose would be different, or made in other ways than what is presented. Still, always having the documents at hand, and being able to zoom in and out using a documentation system such as Confluence or the Structurizr tool, gives a lot of power to the team(s) that actually build the product.

Last part, Class diagrams

To keep it simple, diagrams at this level would be used just to illustrate specific details. These are standard UML diagrams and they can be generated with many tools. Here is a small sample, starting from a sample provided by the LucidChart tool (the same tool I used for the other diagrams):


C4 Model – Class diagram

Common understanding and knowledge

Used by one or more teams that build or maintain a product or software project, this model can generate a common understanding. Working from the same diagrams, team members start using the same vocabulary, giving them a better chance to relate each other’s knowledge and expertise. In the end, this is a core advantage of successful teams.

More on visually communicating the architecture

You can find a full presentation here, made by Simon Brown, who created this model:

Give it a try

Whenever you start a new project, try to make at least one context diagram and then a container diagram. Drawing them on paper, together with the other team members, can bring something new to the conversation and make it easier to explain your view of the system.

*The questions associated with each diagram are taken from Software Architecture for Developers – Volume 2.

The post The C4 software architecture model appeared first on Cloud, Data and Integrations.

Using MongoDB .NET Driver with ASP.NET Core MVC https://qappdesign.com/code/mongodb-aspnetmvc-core/ Thu, 19 Jan 2017 15:06:59 +0000

The post Using MongoDB .NET Driver with ASP.NET Core MVC appeared first on Cloud, Data and Integrations.

Before starting

This is a conversion of the original article Using MongoDB .NET Driver with .NET Core WebAPI to ASP.NET MVC. My intention is not to repeat the points discussed there, but rather to focus on getting an ASP.NET Web App running.

To install

Here are all the things needed to be installed:

Project available in GitHub

Full source of this example is available on GitHub -> https://github.com/fpetru/mongodb-aspnetmvc. It includes the creation of an ASP.NET Core Web Application, as well as a copy of the existing code. The project was created by launching Visual Studio and using the menu: File > New Project > .NET Core > ASP.NET Core Web Application.

Run the sample project

After you have installed the required items, you just need the MongoDB server running, and the user credentials defined so the application can access MongoDB. Once you have the credentials, write these settings in appsettings.json. You can find details on how to do this in the original article Using MongoDB .NET Driver with .NET Core WebAPI.

Then just compile and run from Visual Studio. From the command line, run:

dotnet restore
dotnet run

The page displayed in the browser will look like this:

Controller actions

The only change is in the controller file (HomeController.cs), adding these actions:

public async Task<IActionResult> Read()
{
	const string nodeId = "2";
	Note noteElement = await _noteRepository.GetNote(nodeId) ?? new Note();
	ViewData["Message"] = $"Note Id: {nodeId} - Body: {noteElement.Body}";

	return View();
}

public IActionResult Init()
{
	_noteRepository.RemoveAllNotes();
	_noteRepository.AddNote(new Note() { Id = "1", Body = "Test note 1", CreatedOn = DateTime.Now, UpdatedOn = DateTime.Now, UserId = 1 });
	_noteRepository.AddNote(new Note() { Id = "2", Body = "Test note 2", CreatedOn = DateTime.Now, UpdatedOn = DateTime.Now, UserId = 1 });
	_noteRepository.AddNote(new Note() { Id = "3", Body = "Test note 3", CreatedOn = DateTime.Now, UpdatedOn = DateTime.Now, UserId = 2 });
	_noteRepository.AddNote(new Note() { Id = "4", Body = "Test note 4", CreatedOn = DateTime.Now, UpdatedOn = DateTime.Now, UserId = 2 });

	ViewData["Message"] = "Filled in 4 records";
	return View();
}

Unified frameworks

This conversion was very simple, due to the fact that ASP.NET Core has unified the two frameworks, making it easy to build applications that include both UI (HTML) and APIs. Since they share the same code base and pipeline, the only update needed was in how the data is displayed, in the controller class.

Distributed cache using Redis and ASP.NET Core https://qappdesign.com/code/distributed-cache-using-redis-and-aspnet-core/ Wed, 21 Dec 2016 05:00:38 +0000

The post Distributed cache using Redis and ASP.NET Core appeared first on Cloud, Data and Integrations.

What is Redis ?

Redis is a super fast non-relational database that uses keys to map to different data types. It is a key-value (NoSQL) data store that efficiently solves many different problem sets. Redis was created by Salvatore Sanfilippo in 2009, and Sanfilippo still remains the lead developer of the project today. It is a mature and hugely popular open source project, used by many companies and in countless mission-critical production environments.

Here is an interview with the inventor of Redis, Salvatore Sanfilippo.

Why is Redis popular?

Not only is it extremely effective, but it is also relatively simple. Getting started with Redis is quite fast, and it usually takes only a few minutes to set it up and get it working within an application. Thus, a small investment of time and effort can have an immediate, dramatic impact on the performance of the application.

Just to name two cases when Redis is helpful:

  • Redis is used at Pinterest – see use case
  • or at Twitter, where Raffi Krikorian, VP of Engineering at Twitter, explains how Redis helps support over 30 billion timeline updates per day, based on 5,000 tweets per second or 400,000,000 tweets per day – see the presentation here.

Easy access the code

The full client web application that displays the Twitter messages can be found on GitHub – https://github.com/fpetru/redis-aspnetcore-caching.

To install

Here are all the things needed to be installed locally:

Installing Redis Server on Windows

Create a new ASP.NET Core project, and from NuGet install the Redis-64 package. The server is installed in the default NuGet packages path. To start the server, run the following in a command prompt:

C:\Users\[logged in user]\.nuget\packages\Redis-64\[version installed]\tools>redis-server.exe
# in my case
C:\Users\petru\.nuget\packages\Redis-64\3.0.503\tools>redis-server.exe

redis-server

In the same folder, there is a document describing how to install Redis as a Windows Service (check the file Windows Service Documentation.docx). For the simple scenarios in this article, running it from the command line is enough.

Caching in ASP.NET Core using Redis

To use Redis in ASP.NET Core, we need to reference the Microsoft.Extensions.Caching.Redis.Core package. Additionally, our sample also needs the Session package. These can be installed either through NuGet or by extending the project.json file:

"Microsoft.Extensions.Caching.Redis.Core": "1.0.3",
"Microsoft.AspNetCore.Session": "1.1.0"

To enable Redis in the application, we call the AddDistributedRedisCache method inside ConfigureServices.

public void ConfigureServices(IServiceCollection services)
{
    services.AddDistributedRedisCache(options =>
    {
        options.InstanceName = "Sample";
        options.Configuration = "localhost";
    });
    services.AddMvc();
}

To enable the session, we need to make changes in both ConfigureServices and Configure:

public void ConfigureServices(IServiceCollection services)
{
    services.AddDistributedRedisCache(options =>
    {
        options.InstanceName = "Sample";
        options.Configuration = "localhost";
    });

    services.AddSession();
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, 
    IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseSession();
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

How do we access Redis?

To cache a value in Redis we use:

var valueToStoreInRedis = Encoding.UTF8.GetBytes("This is a cached value from Redis");
HttpContext.Session.Set("TestProperty", valueToStoreInRedis);

To retrieve the value from cache we use:

var valueFromRedis = default(byte[]);
if (HttpContext.Session.TryGetValue("TestProperty", out valueFromRedis))
    valueToDisplay = Encoding.UTF8.GetString(valueFromRedis);
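The session API above stores raw bytes, so a value must be UTF-8 encoded on write and decoded on read. As a quick, language-neutral illustration of the same round-trip (a Python sketch with a plain dict standing in for the Redis-backed session store; the `cache_set`/`cache_get` helper names are mine, not part of any API):

```python
from typing import Optional

cache = {}  # stand-in for the Redis-backed session store

def cache_set(key: str, text: str) -> None:
    # the store only accepts bytes, so encode before writing
    cache[key] = text.encode("utf-8")

def cache_get(key: str) -> Optional[str]:
    # decode back to text on read; None signals a cache miss
    raw = cache.get(key)
    return raw.decode("utf-8") if raw is not None else None

cache_set("TestProperty", "This is a cached value from Redis")
print(cache_get("TestProperty"))
```

The same encode/decode discipline is what `Encoding.UTF8.GetBytes` and `Encoding.UTF8.GetString` implement in the C# snippets above.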

Twitter client example

External interfaces connecting to social platforms are usually relatively slow. If we built a web application showing messages from Twitter, it would take a couple of seconds to load the page (to run the search and retrieve the results). If the same user connected to our application again, Redis could retrieve the same messages from memory, without accessing the Twitter server again. Caching the results, and updating them only when new messages appear, brings a huge improvement to the overall performance.

To keep things simple, I have earlier written a small client in Python that connects to Twitter and saves the details to a JSON file. You can read more details in the article Visual Studio Code – Connect to Twitter with Python. It includes the full source code, and you can very quickly retrieve and save data to a JSON file.

In our sample here, we start from an already available JSON file.

[
 {
        "Id": 0,
        "ProfileImage": "https://pbs.twimg.com/profile_images/1772973596/inogiclogo_normal.png",
        "ProfileDescription": "Microsoft Partner with Gold Competency in #Dynamics365 #MSDynCRM providing Solns/Services. Innovators of @Maplytics & Inolink #QuickBooks Int #MSDyn365 #MVPBuzz",
        "Username": "Inogic",
        "Text": "Execute the Global Action Using Web API in Dynamics CRM https://t.co/DAuzP6L7FE #MSDyn365 #webapi https://t.co/v0XgyotaFn",
        "ScreenName": "@inogic"
    },
    ....
]

Retrieval of data from Twitter takes a couple of seconds. To simulate this, we add a delay of 20 seconds in our code. Here is the code from the controller.

public IActionResult Index()
{
	var watch = Stopwatch.StartNew();
	string jSONText = RetrieveOrUpdateRedis();
	watch.Stop();

	TempData["DataLoadTime"] = watch.ElapsedMilliseconds;
	var itemsFromjSON = JsonConvert.DeserializeObject<List<TwitterMessage>>(jSONText); // generic argument restored after being lost in extraction; the model type name is assumed
	return View(itemsFromjSON);
}

private string RetrieveOrUpdateRedis()
{
	var valueFromRedis = default(byte[]);
	string valueToReturn = string.Empty;
	if (HttpContext.Session.TryGetValue("TwitterDataset", out valueFromRedis))
	{
		// Retrieve from Redis
		valueToReturn = Encoding.UTF8.GetString(valueFromRedis);
		TempData["DataLoadType"] = "From Redis";
	}
	else
	{
		// read the file and update the URLs
		var jSONText = System.IO.File.ReadAllText("twitter.json");
		valueToReturn = GetUpdatedFileContent(jSONText);

		// add an artificial delay of 20 seconds (simulating the search is done directly against a Twitter server)
		Thread.Sleep(20000);

		// store values in Redis
		var valueToStoreInRedis = Encoding.UTF8.GetBytes(valueToReturn);
		HttpContext.Session.Set("TwitterDataset", valueToStoreInRedis);
		TempData["DataLoadType"] = "From file";
	}

	return valueToReturn;
}

// a minimum data processing, updating the URLs
private string GetUpdatedFileContent(string jSONText)
{   
	var itemsFromjSON = JsonConvert.DeserializeObject<List<TwitterMessage>>(jSONText); // generic argument restored after being lost in extraction; the model type name is assumed
	foreach (var item in itemsFromjSON)
	{
		Regex r = new Regex(@"(https?://[^\s]+)");
		item.Text = r.Replace(item.Text, "$1");
	}

	return JsonConvert.SerializeObject(itemsFromjSON);
}
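`RetrieveOrUpdateRedis` is the classic cache-aside pattern: try the cache first; on a miss, do the slow fetch, populate the cache, and return. Here is a minimal, runnable sketch of the same control flow in Python, with a dict standing in for Redis and a canned string standing in for the Twitter file. Note that the original C# replacement string `"$1"` appears to have lost its HTML anchor markup during extraction; this sketch assumes link-wrapping was intended.

```python
import re

cache = {}  # stand-in for the Redis session store

def slow_fetch():
    # stand-in for reading twitter.json plus the simulated 20-second delay
    text = "Read the docs at https://example.com today"
    # same idea as GetUpdatedFileContent: wrap bare URLs in anchor tags
    return re.sub(r"(https?://\S+)", r'<a href="\1">\1</a>', text)

def retrieve_or_update(key):
    raw = cache.get(key)
    if raw is not None:                 # cache hit: fast path
        return raw.decode("utf-8"), "From Redis"
    value = slow_fetch()                # cache miss: slow path
    cache[key] = value.encode("utf-8")  # populate for the next request
    return value, "From file"

value, source = retrieve_or_update("TwitterDataset")
```

The first call takes the slow path and fills the cache; every subsequent call for the same key is served from memory, which is exactly why the page load drops from ~20 seconds to milliseconds in the measurements below.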

How fast does the application run with Redis?

The initial load time is displayed below (this also includes the artificial 20-second timeout):

Just taking the results from Redis runs much faster. This excludes any “connection” to Twitter or any other internal update. Here is a screenshot with the data retrieved from memory:

To replicate this multiple times, and not just at the beginning, use the flushall command to clear the entire Redis cache:

C:\Users\[logged in user]\.nuget\packages\Redis-64\[version installed]\tools>redis-cli.exe

# in my case
C:\Users\petru\.nuget\packages\Redis-64\3.0.503\tools>redis-cli.exe

# and then
127.0.0.1:6379> flushall

Is Redis just a cache? When to use it?

Redis is much more than just a cache. Like any cache, Redis does store [key, value] pairs. More than this, Redis allows us to use values in very efficient ways. Using different data structures (not just strings), we gain a lot of power (such as the ability to fine-tune cache contents and durability) and greater efficiency overall. Once we start using these data structures, the efficiency boost becomes tremendous for specific application scenarios.

Here is a summary of the data structures, along with a few concrete use cases associated with each type:

Strings: Caches, counters, and clever bit-operations. Session management is key to many online applications; its success depends on the responsiveness and accuracy of the data displayed (shopping carts for eCommerce websites, for example). As a distributed cache, Redis is a great alternative to a local cache.

Lists: Queues, stacks and cyclical lists. All of these help to decouple processes while receiving many new requests. A simple example would be a web application used to place orders: processing the orders may take a while, so they can be decoupled using queues.

Hashes: Field-value pairs, very efficient when storing many small values. An example would be implementing a blacklist-checking feature, or very fast access to lots of images in a gallery.

Sets: Very good for expressing relations between objects. These make it easy to implement social tags in social networking applications, discussion tags in blogs, and so on.

Sorted Sets: A mix between Sets and Hashes. One example would be online games, where scoreboards and player statistics are vital.

HyperLogLog: Counting unique things. A simple use case is counting the unique queries performed by users in a search form during the day.
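To make the mapping concrete, here are plain-Python stand-ins for a few of these use cases. This is an illustrative sketch only, not Redis code; the corresponding redis-py calls (incr, rpush/lpop, sadd, zincrby) would operate on a real server instead of local objects.

```python
from collections import deque

# String as a counter (Redis: INCR page_views)
page_views = 0
page_views += 1

# List as a queue, decoupling order intake from processing (Redis: RPUSH / LPOP)
orders = deque()
orders.append("order-1")
orders.append("order-2")
next_order = orders.popleft()  # the order-processing worker picks this up

# Set for tags; duplicates collapse automatically (Redis: SADD post:1:tags ...)
tags = set()
tags.update({"redis", "caching", "redis"})

# Sorted-set-style scoreboard (Redis: ZINCRBY scoreboard 10 "alice")
scores = {"alice": 0, "bob": 0}
scores["alice"] += 10
leader = max(scores, key=scores.get)
```

Each stand-in keeps the semantics of the Redis type: atomic-looking counter updates, FIFO ordering, set deduplication, and score-ordered ranking.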

Visual Studio Code – Connect to Twitter with Python https://qappdesign.com/code/vscode-twitter-python-json/ Tue, 13 Dec 2016 07:00:08 +0000

The post Visual Studio Code – Connect to Twitter with Python appeared first on Cloud, Data and Integrations.

Visual Studio Code is a cross-platform editor that supports multiple programming languages. Combined with Python and its third-party packages that wrap Twitter’s API, we can easily connect to Twitter, then read and use the data in just a few lines of code.

Why Twitter ?

Twitter is a popular social network where users can share short SMS-like messages called tweets. Users share thoughts, links and pictures on Twitter, and journalists comment on live events. The list of different ways to use Twitter could be really long, and with 500 million tweets per day, there’s a lot of data to play with.

Using an excellent Python library – TwitterAPI as a minimal wrapper to native Twitter API calls, we can have a tool to extract the information we need.

Getting started

First, you will need to create a Twitter account and then a Twitter application. Once you have an account, go to the developer site, in the Application Management area, and create a new application. After your application is created, you will need to get your API keys (or generate some) and also generate access tokens.

Download and install Python 3 first, and then Visual Studio Code. With minor changes, our example will also work with Python 2.

When you first open VS Code, open the Extensions list and install the Python extension.

Visual Studio Code - Python plugin

Download from GitHub the code (click to download) and extract the archive to a local folder. Then use the option File -> Open Folder in VSCode.

VSCode Python Twitter

To install the Twitter library, open a command prompt and run each of the following install commands. You can also run these lines within VS Code: it has an integrated terminal, easily accessible from the menu item View -> Integrated Terminal.

pip install TwitterAPI
pip install configparser

Connecting to Twitter

We will use the TwitterAPI library for this. With the credentials from Twitter, we can access the Twitter data:

from TwitterAPI import TwitterAPI
connectTwitter = TwitterAPI(consumer_key, consumer_secret, access_token_key, access_token_secret)

Then we can search and extract results in successive batches or pages. In this example we search for the ‘#webapi’ string, retrieving results in English in batches of 10.

twitterPage = TwitterRestPager(connectTwitter, 'search/tweets', \
                               {'q':'#webapi', 'count':10, 'lang': 'en'})

Finally, we can iterate through the search results:

for item in twitterPage.get_iterator():
   print('Tweet: ', item['text'], ' from ', item['user']['name'])

To summarize, here is the full code allowing to connect, and retrieve the details from Twitter:

from TwitterAPI import TwitterAPI
from TwitterAPI import TwitterRestPager
import configparser

configValues = configparser.RawConfigParser()
configValues.read(r'.\config.real.ini')

# connecting to Twitter - replace parameters with values
connectTwitter = TwitterAPI(consumer_key, consumer_secret,  
                            access_token_key, access_token_secret)

# search tweets
twitterPage = TwitterRestPager(connectTwitter, \
                               'search/tweets', \
                               {'q':'#webapi', 'count':10, 'lang': 'en'})

responseValues = []

# if connected successfully, retrieve paginated search results
for item in twitterPage.get_iterator():
    if 'user' in item and 'text' in item:
        responseValues.append({'username': item['user']['name'],
                               'screen_name': '@{}'.format(item['user']['screen_name']),
                               'profile_image': item['user']['profile_image_url_https'],
                               'profile_description': item['user']['description'],
                               'text': item['text']})
        print('Tweet ', len(responseValues), ' from  ', item['user']['name'])
    elif 'message' in item and item['code'] == 88:
        print('SUSPEND, RATE LIMIT EXCEEDED: %s\n' % item['message'])
        break

    # stop after first 50 tweets, to not exceed the limit too fast
    if len(responseValues) > 50:
        break
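The loop above combines three concerns: keeping well-formed tweets, stopping when Twitter reports a rate limit (error code 88), and capping the total number of results. That control flow can be exercised in isolation with a fake item stream; the sketch below mirrors the loop's logic with a hypothetical `collect_tweets` helper (not part of the TwitterAPI library).

```python
def collect_tweets(items, limit=50):
    """Keep well-formed tweets; stop on a rate-limit item (code 88)
    or once the result cap is exceeded, mirroring the loop above."""
    collected = []
    for item in items:
        if "user" in item and "text" in item:
            collected.append({"username": item["user"]["name"],
                              "text": item["text"]})
        elif "message" in item and item.get("code") == 88:
            break  # RATE LIMIT EXCEEDED: stop paging immediately
        # stop after the cap, to not exceed the API limit too fast
        if len(collected) > limit:
            break
    return collected
```

Feeding it the iterator from `twitterPage.get_iterator()` would produce the same `responseValues` shape as the original loop.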

Save the results to a json file

Saving in JSON format opens up many ways to use the data effectively. It can be done with a simple call from the json package.

import json
with open(r'.\twitter.txt', 'w') as outfile:
    json.dump(responseValues, outfile, indent=4)

Load the credentials from a configuration file

Loading the connection credentials from a configuration file can easily be done in Python. Here is a sample configuration file:

[TwitterSettings]
consumer_key = aaa
consumer_secret = bbb
access_token_key = ccc
access_token_secret = ddd

To read any of the associated configuration keys, we run the following commands:

import configparser

configValues = configparser.RawConfigParser()
configValues.read(r'.\config.ini')

print(configValues.get('TwitterSettings', 'consumer_key'))

Calling the TwitterAPI constructor with the settings from a configuration file becomes:

connectTwitter = TwitterAPI(configValues.get('TwitterSettings', 'consumer_key'),
                            configValues.get('TwitterSettings', 'consumer_secret'),
                            configValues.get('TwitterSettings', 'access_token_key'),
                            configValues.get('TwitterSettings', 'access_token_secret'))
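Since examples that read from a file path are hard to try in isolation, here is a self-contained variant that parses the same settings from an in-memory string using configparser's `read_string`. The section and key names match the sample configuration above; the four values would then feed the TwitterAPI constructor.

```python
import configparser

# same shape as the config.ini sample above, inlined for a self-contained demo
sample = """
[TwitterSettings]
consumer_key = aaa
consumer_secret = bbb
access_token_key = ccc
access_token_secret = ddd
"""

config = configparser.RawConfigParser()
config.read_string(sample)  # parse from memory instead of a file path

credentials = (
    config.get("TwitterSettings", "consumer_key"),
    config.get("TwitterSettings", "consumer_secret"),
    config.get("TwitterSettings", "access_token_key"),
    config.get("TwitterSettings", "access_token_secret"),
)
```

Keeping the real values in config.real.ini (outside version control) and a placeholder config.ini in the repository, as the earlier listing does, avoids committing secrets.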

Debug the code with VS Code

We can run the code above and easily debug it in VS Code. To make it easier, download the code and open the VS Code debugger, selecting Python as the language.

VSCode Python Debug

The Python plugin for VS Code comes with linting support: a source code, bug and quality checker that follows the style recommended by the Python style guide (PEP 8). Pylint quickly highlights errors and shows recommendations on how to write the code.

Or run directly

The other quick way to run the code, without debugging, is the CTRL-B shortcut. It uses the tasks.json configuration file, saved in the .vscode folder, to automatically run the current file with Python.

{
    "command": "python",
    "showOutput": "always",
    "windows": {
        "command": "python.exe" 
    },
    "args": ["${file}"]
}

Final results

Using the code above, we can retrieve tweets like the ones displayed below:

[
    {
        "profile_image": "https://pbs.twimg.com/profile_images/802976662187114496/gif5G5oY_normal.jpg",
        "text": "RT @techjunkiejh: Chart Widgets With Server Side Data In MVC Using #AngularJS And #WebAPI https://t.co/MfW9zi431f #javascript https://t.co/\u2026",
        "screen_name": "@Phunick",
        "profile_description": "Software engineer | Consulting | Architect | Developer Expert: MCP, MCSD, MSSQL,BigData, AngularJS,TypeScript,Nodejs,Reactjs.Telerik&...",
        "username": "Nick"
    }
]

Project available in GitHub

The code can also be accessed on GitHub – https://github.com/fpetru/python-twitter-vscode

Angular 2 with ASP.NET Core Web API – Build a simple Notebook app – Part 1 https://qappdesign.com/code/getting-started-with-angular2-with-aspnet-core-webapi-build-notebook-app/ Thu, 24 Nov 2016 15:50:12 +0000

The post Angular 2 with ASP.NET Core Web API – Build a simple Notebook app – Part 1 appeared first on Cloud, Data and Integrations.

This article presents a step-by-step approach to creating an Angular 2 application that consumes an ASP.NET Core REST WebAPI. It continues the earlier post Using MongoDB with ASP.NET Core WebAPI and presents a gentle introduction to the Angular 2 framework.

This is the first part and the scope of the series is to present step by step how to build

a web application to store your ideas in an easy way, adding text notes, either from desktop or mobile, with few characteristics: run fast, save on the fly whatever you write, and be reasonably reliable and secure.

Topics covered:

  • Angular 2 Module
  • Angular 2 Component
  • Angular 2 Dependency Injection
  • Angular 2 Lifecycle
  • Angular 2 Service and Observable
  • Make an Angular 2 application connected to an ASP.NET Core WebApi project

Why choose Angular 2?

Angular 2 is a framework that provides lots of functionality out of the box. It has libraries for routing, WebAPI calls, dependency management and so on. Angular 2 has also embraced TypeScript, a superset of JavaScript. It is very well integrated with Visual Studio, offering suggestions as you type and spotting errors as you write the code.

The new version of Angular provides a more consistent and compact development experience. It’s also faster, provides server-side rendering out of the box, is cross-platform, and supports legacy browsers.

Live code samples using Plunker

Even though the Angular 2 framework has advantages, starting a solution combined with ASP.NET Core can seem difficult at the beginning. To make things easier, each section comes with a live Angular 2 sample. These evolve from a simple “hello world” to reading data from the WebAPI. You can view them while reading the article, or quickly access them from this summary:

Easy access the code

The blog post starts with a pre-configured ASP.NET Core solution and finishes with an application that connects to the WebAPI. You can access both projects using the links below:

To install

Here are all the things needed to be installed locally:

Getting Started With Angular 2 and TypeScript

The best way to get started learning Angular 2 and TypeScript is to clone an application starter: a minimalist Angular 2 app that has the full setup for Angular 2, with TypeScript and the module loader.

First things first. Once all the items presented above are installed, we need to make sure the node and npm versions are correct. Angular 2 requires node v4.x.x or higher and npm 3.x.x or higher.

To check the versions, open a command prompt and run:

c:\windows\system32>npm --version
3.10.8
c:\windows\system32>node --version
v6.9.1

My local setup fulfills the minimum requirements. If you have issues with the npm version, please refer to the next article.

Once these versions are correct, we can proceed to clone the initial project from GitHub (or, simpler, download it).

The last thing to do before opening the initial solution is to install Gulp. Run this command in a command prompt, in the project folder:

npm install gulp --save-dev

This command installs Gulp locally. The starting project has all the configuration in place, including a fully configured Gulp file.

Open the solution in Visual Studio. Depending on the speed of your internet connection, it can take a little while to download all the required packages locally. Once this completes, probably in a few minutes, build the solution and run it.

Here are the files added or updated to enable running Angular 2 within an ASP.NET Core MVC application:

// Angular 2 source code details
- app folder            # Angular2 source folder
-  /css & /js           # Folders for CSS & additional Java script files
- note.app.module.ts    # The Angular module & sample component

// ASP.NET Core - Updated files 
- Controllers/HomeController.cs     # Default ASP.NET MVC Controller, at the beginning with no change
- Views/Home/Index.cshtml           # Default ASP.NET MVC View, loads the Angular2 module
- Startup.cs                        # Make the static files servable 

// TypeScript configuration
- gulpfile.js           # Gulp configuration file for automating the deployment flow
- hosting.js            # Configuration file added to run the application on a specific port
- package.json          # Lets NPM identify the project and handle its dependencies
- systemjs.config.js    # Configures SystemJS to load modules compiled by the TypeScript compiler
- tsconfig.json         # TypeScript compiler configuration
- typings.json          # TypeScript declaration files

Module, Bootstrapping, Components

  • In Angular 1 we used the ng-app directive to point Angular to the starting point of the application. In Angular 2 we use a bootstrapper. Angular 2 is platform agnostic: we can run it in the browser, but we can also run it in a web worker, on the server, and potentially natively on mobile devices, using different bootstrappers.
  • platformBrowserDynamic().bootstrapModule(AppModule);
    
  • Angular 2 modules and the new NgModule decorator let us declare in one place all the dependencies and components of our application, without the need to do it on a per-component basis (as we used to in previous versions). Here we declare that the component App should be loaded first.
  • @NgModule({
        imports: [BrowserModule],
        declarations: [App],
        bootstrap: [App]
    })
    
    export class AppModule { }
    
  • The component is a reusable piece of UI, displayed by a custom HTML element. It is self-contained and consists of at least a piece of HTML code known as the template, a class that encapsulates the data and interactions available to that template, and the aforementioned HTML element, also known as the selector.
  • @Component({
        selector: 'notes-app',
        template: `<div>
                  <h2>NotebookApp with {{name}}</h2>
                   </div>`
    })
    
    export class App {
        name: string;
        constructor() {
            this.name = 'Angular 2';
        }
    }
    
  • To display it, we include the new HTML tag in the ASP.NET Core view:
  • <notes-app></notes-app>
    
    <script>
        System.import('dist/note.app.module')
              .catch(function (err) { console.error(err); });
    </script>
    

You can see the results by running the solution, or use the first sample from Plunker: Preview A – Angular 2 – Starter application

Adding the first component: Listing the notes

The first thing we are going to build does not require services, not yet. We will create our first component to display a list of notes, and we will start by faking that data.

It is good practice to start by defining the domain model of our problem space, in this case a NoteItem. We will take advantage of a TypeScript interface and create NoteItem within the noteModel.ts file. To keep things simple, for the moment all fields, including the dates, are strings. The component is named NotesComponent.

export interface NoteItem {
    Id: string,
    Body: string,
    UpdatedOn: string,
    CreatedOn: string,
    UserId: number
}
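Since all fields are plain strings (plus a numeric UserId), the interface can be exercised with framework-free TypeScript. The summarize helper below is hypothetical, used only to show the compiler enforcing the shape:

```typescript
// Mirrors the NoteItem interface from note.model.ts
interface NoteItem {
    Id: string;
    Body: string;
    UpdatedOn: string;
    CreatedOn: string;
    UserId: number;
}

// Hypothetical helper, not part of the app: builds a one-line summary.
// The compiler guarantees the accessed fields exist on the note.
function summarize(note: NoteItem): string {
    return `#${note.Id} (user ${note.UserId}): ${note.Body}`;
}

const sample: NoteItem = {
    Id: "1",
    Body: "First note",
    UpdatedOn: "2016-11-21 10:20:23",
    CreatedOn: "2016-11-21 10:20:23",
    UserId: 1
};

console.log(summarize(sample)); // -> "#1 (user 1): First note"
```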

To iterate through the list of notes we will use *ngFor, a repeater directive. The code snippet looks like this:

  <ul>
    <li *ngFor="let note of noteItems">
      {{note.Body}}
    </li>
  </ul>

And now we update the NotesComponent component to display the data:

import { Component } from '@angular/core'
import { NoteItem } from './note.model'

@Component({
  selector: 'notes-app',
  template: `
  <ul>
    <li *ngFor="let note of noteItems">
      {{note.Body}}
    </li>
  </ul>
  `
})
export class NotesComponent {
  noteItems: NoteItem[] = [
    {Id:'1', Body: 'First note', UpdatedOn: '2016-11-21 10:20:23', CreatedOn: '2016-11-21 10:20:23', UserId: 1},
    {Id:'2', Body: 'Second note with more details', UpdatedOn: '2016-11-21 10:20:23', CreatedOn: '2016-11-21 10:20:23', UserId: 1},
    {Id:'3', Body: 'Third note, and the last sample', UpdatedOn: '2016-11-21 10:20:23', CreatedOn: '2016-11-21 10:20:23', UserId: 1},
  ];
}

You can see the same Angular 2 code in Plunker: Preview B – Angular 2 – First component.
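Conceptually, *ngFor maps each array element to one rendered fragment, with the bindings interpolated. A framework-free sketch of that idea (the renderList function is invented for illustration, it is not Angular code):

```typescript
interface NoteItem { Id: string; Body: string; }

// Invented stand-in for what *ngFor does with the <li> template:
// one fragment per array element, with {{note.Body}} interpolated.
function renderList(items: NoteItem[]): string {
    const lis = items.map(note => `<li>${note.Body}</li>`);
    return `<ul>${lis.join("")}</ul>`;
}

const notes: NoteItem[] = [
    { Id: "1", Body: "First note" },
    { Id: "2", Body: "Second note with more details" }
];

console.log(renderList(notes));
// -> "<ul><li>First note</li><li>Second note with more details</li></ul>"
```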

Dependency Injection and common settings

To make it easier to present Dependency Injection (DI) in Angular 2, let's use a common settings class that needs to be accessed by other components. The way it is built could be further extended (e.g. reading from a configuration file), but the simple model is better suited to presenting dependency injection in Angular 2.

Let’s start from a simple class:

export class Configuration {
    public ApiServer: string = "http://localhost:6001/";
    public ApiUrl: string = "api/notes";
    public ServerWithApiUrl: string = this.ApiServer + this.ApiUrl;
}
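A quick framework-free check that the concatenation yields the expected endpoint (the URL values are the ones used throughout the article):

```typescript
// Same shape as the Configuration class above, minus the Angular decorator.
// Property initializers run in declaration order, so ServerWithApiUrl can
// safely reference the two fields declared before it.
class Configuration {
    public ApiServer: string = "http://localhost:6001/";
    public ApiUrl: string = "api/notes";
    public ServerWithApiUrl: string = this.ApiServer + this.ApiUrl;
}

const config = new Configuration();
console.log(config.ServerWithApiUrl); // -> "http://localhost:6001/api/notes"
```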

To make it accessible to other components via DI, we make two changes:

  • Make the class injectable
  • import { Injectable } from '@angular/core';
    
    @Injectable()
    export class Configuration {
        public ApiServer: string = "http://localhost:6001/";
        public ApiUrl: string = "api/notes";
        public ServerWithApiUrl: string = this.ApiServer + this.ApiUrl;
    } 
    
  • and then make it available as a provider, in the module configuration
  • @NgModule({
        imports: [BrowserModule],
        declarations: [NotesComponent],
        providers: [Configuration],
        bootstrap: [NotesComponent]
    })
    

These updates allow us to inject Configuration, via the constructor, into our components.

export class NotesComponent {
    constructor(private _configuration: Configuration) {
    }
}

See this live in Plunker: Preview C – Angular 2 – Injectable

Using Angular 2 component lifecycle

When a component is created its constructor is called, and there we initialize the component. If we rely on properties or data from other components, we need to wait for those components to initialize first. For this we use the ngOnInit lifecycle hook, which lets us call the WebAPI whenever the component is initialized.

import { Component, OnInit } from '@angular/core';

...

export class NotesComponent implements OnInit {
    ngOnInit() {
       // access the WebAPI service    
    }
}

Creating an Angular 2 service

An Angular 2 service is just an ES6 class that encapsulates functionality. It is used by the rest of the application and is referred to as a service.

In the example below, we create a service that uses the native Angular 2 Http service and allows us to receive the JSON details. The class is also marked as injectable, so it can be easily accessed and used.

import { Injectable } from "@angular/core";
import { Http } from "@angular/http";
import "rxjs/add/operator/map";
import { Observable } from "rxjs/Observable";
import { NoteItem } from "../../models/note/noteModel";
import { Configuration } from "../../app.constants";

@Injectable()
export class NoteService {
    constructor(private _http: Http, private _configuration: Configuration) {
    }

    public getAll = (): Observable<NoteItem[]> => {
        return this._http.get(this._configuration.ServerWithApiUrl) 
            .map(data => data.json());
    };
}

We used several new terms in the above code snippet; here are a few details:

  • The Angular 2 Http client service provides support for making HTTP requests, and comes with methods corresponding to HTTP verbs: get, post, put etc.
  • Observable is the asynchronous pattern used in Angular 2. The concept of observable comes from the observer design pattern as an object that notifies interested set of observers when something happens. In RxJs it has been generalized to manage sequences of data or events, to become composable with other observables and to provide a lot of utility functions known as operators.
  • map transforms the items within a sequence into the domain model of our application – in our case noteItems.
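What the map call does can be shown with a framework-free stand-in for the HTTP response. The FakeResponse class below is invented for illustration; the real Angular Response class similarly exposes a json() method that parses the raw body:

```typescript
interface NoteItem { Id: string; Body: string; }

// Invented stand-in for Angular's Response: json() parses the raw body.
class FakeResponse {
    constructor(private body: string) {}
    json(): any {
        return JSON.parse(this.body);
    }
}

const raw = new FakeResponse('[{"Id":"1","Body":"First note"}]');

// The same transformation the service applies with .map(data => data.json())
const items: NoteItem[] = ((data: FakeResponse) => data.json())(raw);

console.log(items[0].Body); // -> "First note"
```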

To use the pattern, we subscribe to the observable, calling .subscribe on it. Once the details are received asynchronously, we fill in the local variable myItems.

export class NotesComponent implements OnInit {
    public myItems: NoteItem[];

    constructor(private _dataService: NoteService) {
    }

    ngOnInit() {
        this._dataService
            .getAll()
            .subscribe((data: NoteItem[]) => this.myItems = data,
            () => console.log("getAllItems() complete from init"));
    }
}
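The subscribe mechanics can be illustrated without RxJS. The toy observable below (an illustration only, far simpler than the real library) pushes values to an observer's callbacks, which is essentially the contract the component's subscription relies on:

```typescript
// A toy, synchronous stand-in for an RxJS observable -- illustration only.
class TinyObservable<T> {
    constructor(private producer: (next: (v: T) => void, complete: () => void) => void) {}

    subscribe(next: (v: T) => void, complete?: () => void): void {
        this.producer(next, complete || (() => {}));
    }
}

interface NoteItem { Id: string; Body: string; }

// Stand-in for NoteService.getAll(): emits one array, then completes.
function getAll(): TinyObservable<NoteItem[]> {
    return new TinyObservable<NoteItem[]>((next, complete) => {
        next([{ Id: "1", Body: "First note" }]);
        complete();
    });
}

let myItems: NoteItem[] = [];
getAll().subscribe(
    (data) => { myItems = data; },
    () => console.log("getAll() complete"));

console.log(myItems.length); // -> 1
```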

Since we are only retrieving data (GET), we can simulate the external service by reading a JSON file. See in Plunker the concept of an Angular 2 service: Preview D – Angular 2 service connected to a REST WebAPI.

Error handling with Observables

The first level of error handling should happen at the service level, where problems related to the HTTP requests can be managed. In this simple application we will just log the error and transform it into an application-level error:

export class NotesComponent implements OnInit {
    public myItems: NoteItem[];

    constructor(private _dataService: NoteService) {
    }

    ngOnInit() {
        // requires: import "rxjs/add/operator/catch";
        this._dataService
            .getAll()
            .catch(handleException)
            .subscribe((data: NoteItem[]) => this.myItems = data,
                (error: any) => console.error(error),
                () => console.log("getAllItems() complete from init"));
    }
}

// requires: import "rxjs/add/observable/throw";
function handleException(error: any) {
  // log the error
  let errorMsg = error.message || `Problem accessing the data!`;
  console.error(errorMsg);

  // re-throw as an application level error
  return Observable.throw(errorMsg);
}
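The message-fallback logic in handleException is plain JavaScript; a minimal check of both branches (the errorMessage function just isolates that one line):

```typescript
// Mirrors the fallback expression from handleException above
function errorMessage(error: any): string {
    return error.message || "Problem accessing the data!";
}

console.log(errorMessage(new Error("timeout"))); // -> "timeout"
console.log(errorMessage({}));                   // -> "Problem accessing the data!"
```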

Let’s add an internal REST service, using an ASP.NET Core WebAPI controller

Before accessing the other ASP.NET Core project, we can easily simulate it by adding a new controller to our ASP.NET project. This makes it easier to run and test using a single solution. The full source code of this controller is included here (you can also download the full project).

using Microsoft.AspNetCore.Mvc;

namespace NotebookAppWeb.Controllers
{
    [Produces("application/json")]
    [Route("api/[controller]")]
    public class NotesController : Controller
    {
        private class NoteItem
        {
            public string Id;
            public string Body;
            public string UpdatedOn;
            public string CreatedOn;
            public int UserId;
        }

        // GET: api/notes
        [HttpGet]
        public IActionResult Get()
        {
            NoteItem[] arrayOfNotes = new NoteItem[] {
                  new NoteItem()
                  { Id = "1",
                    Body = "Hello note !",
                    UpdatedOn = "2016-11-16 10:50:23",
                    CreatedOn = "2016-11-16 10:50:23",
                    UserId = 1
                  },
                  new NoteItem()
                  { Id = "2",
                    Body = "Hello 2 should come after",
                    UpdatedOn = "2016-11-16 10:50:23",
                    CreatedOn = "2016-11-16 10:50:23",
                    UserId = 2
                  },
                  new NoteItem()
                  { Id = "3",
                    Body = "Hello 3 should come latest",
                    UpdatedOn = "2016-11-17 10:50:23",
                    CreatedOn = "2016-11-17 10:50:23",
                    UserId = 3
                  }};

            // Let the framework serialize the array to JSON; returning a
            // pre-serialized string here would double-encode the payload.
            return Ok(arrayOfNotes);
        }
    }
}

Putting things together

We can now create an application that connects all the concepts presented above and simulates a basic Notebook application. It connects to the local controller as a REST service (WebAPI controller), and then displays the notes received.

Download the full ASP.NET Core project (GitHub).

Connecting to the ASP.NET Core WebAPI and MongoDB project

Next, connect to the ASP.NET Core WebAPI project. Open the GitHub page – https://github.com/fpetru/WebApiMongoDB – where the project description explains how to run it.

Once this is set up, change the configuration to point to this REST service (instead of the local controller), and run the project.

What’s next

The series will continue with a new part presenting all the actions on notes. We will then be able to add, edit or remove notes.

The post Angular 2 with ASP.NET Core Web API – Build a simple Notebook app – Part 1 appeared first on Cloud, Data and Integrations.

]]>
Why this blog ? https://qappdesign.com/code/why-this-blog/ https://qappdesign.com/code/why-this-blog/#comments Mon, 24 Oct 2016 21:43:00 +0000 http://qappdesign.com/?p=24 I had the idea of starting a blog earlier, but it never came into practice. […]

The post Why this blog ? appeared first on Cloud, Data and Integrations.

]]>

I had the idea of starting a blog earlier, but it never came into practice. Currently I am enjoying the blogging course from John Sonmez – Simple Programmer, and I would like to give the idea a new try.

What should the blog have?

The blog will focus on a lightweight approach to software design and software architecture. To try out multiple ideas, I would like to create a space for software architecture katas. These would be presented end to end – as an exercise to try new things, receive feedback and extend knowledge. Furthermore, I would like to gather and share excellent application designs.

Still, this is not new. Why this idea?

Usually, you can easily find out how to build applications, with the technologies described in detail. It is much harder to find a place that presents concrete software designs in a simple way, explaining why certain technologies or approaches are used. What do you need for a faster, more secure, more reliable solution? What are the disadvantages when some excellent individual technologies are coupled together?

I hope you will enjoy reading the posts,

Thanks
Petru Faurescu

The post Why this blog ? appeared first on Cloud, Data and Integrations.

]]>