Binary Noggin https://binarynoggin.com/ Thu, 02 Mar 2023 20:32:40 +0000 Team Retrospectives: A How-To Guide for Team Building https://binarynoggin.com/blog/team-retrospectives-a-how-to-guide-for-team-buildling/ Thu, 02 Mar 2023 20:25:16 +0000 https://binarynoggin.com/?p=3121 The post Team Retrospectives: A How-To Guide for Team Building appeared first on Binary Noggin.


Team retrospectives are a great way to check in with your team to see how they are doing and how the project is progressing. Retros usually happen between “iterations,” taking place every two weeks or every month. They provide time for everyone to gather, in person or online, and review the goals set for that iteration. Everyone is on equal footing during this meeting: it is a safe space to be open and constructive with each other to help move everyone forward. Retros are typically run in person but transfer easily to Zoom or Google Meet. If the retro covers an iteration of one to two weeks, it’s best to keep the retrospective to no longer than an hour.

I am a Retro Facilitator, and this is how I ran a recent retrospective with our clients. 

 

Greeting (5 minutes) 

  • I welcome everyone to the retro and do some small talk while we wait for any last-minute stragglers. 

 

Game (5 minutes)

  • After everyone has settled in, I guide them through a game to get everyone talking and the creative juices flowing. 
  • This time, I choose Gibberish Dictionary. Everyone takes turns offering a made-up word and their definition for it. It generates some laughs from those participating.

 

Review Working Agreements and goal (1 minute)

  • Before we get into the activity, we review the working agreements that the team had put together during our first meeting. These are the guidelines that everyone created to provide security and freedom within the retros. 
  • Then, I go over the agreed-upon goals from the last retro, which will be the topic of the activity.

 

Activity (2 minutes to explain, 15 minutes to work)

  • Alright, let’s get to the meat of the retro, the activity. Here, we digest the information from the iteration and see what we can do to move forward. 
  • There are plenty of websites that have great activities your team can do, but when you run retros regularly, you can make them up as you go.
  • Today’s activity is one that I made up myself. It’s called Red Light, Green Light, Yellow Light, Turn Signal. It’s a bulky name, I know, but it lends itself to easy interpretation.
  • If you are in person, this game would require a whiteboard and some sticky notes. When I do mine online, I use an online post-it note website that allows for live interaction among the participants. I send the link to the board I made ahead of time, just before the meeting starts. 

 

Team Retrospective Activity

 

  • Once everyone has joined, I explain the activity. 
  • The Red Light signifies the things the team did that stopped forward movement. 
  • Green Light is the actions the team did to keep the iteration going. 
  • Yellow Light stands for what slowed the team down from reaching the goal of the iteration. 
  • The Turn Signal is for the things that got the team off track and away from reaching the goal. 
  • After explanations, I step back and let the team go to work on writing their sticky notes and placing them under each category. I make sure to set a timer so that we don’t enjoy the silence for too long!
  • Important note: Even if everyone seems to be done early, I let the time conclude to give team members a chance to think over the details of the past weeks.

 

Generate insights (10-15 minutes) 

  • Once the timer goes off, I let everyone finish up what they were writing and move on to the review. 
  • We go through the list of post-its one by one and let the team decide if they want to add to or elaborate on any of the topics. This is one of my favorite parts of the retro: it can spark a good discussion about the problems people have seen, or bring joy to a team member’s face when their accomplishments are acknowledged.

 

Decide what to do (10 minutes) 

  • After looking at the results of the last step, the changes that need to be made can be pretty obvious from the Red Light, Yellow Light, or Turn signal categories. 
  • Now is the time to decide what problems need to be addressed during the next iteration. When there are too many problems to reasonably work on, a vote or a discussion is held to narrow down the list.

 

Warm Fuzzies (remaining time) 

  • I take the remaining time to review the results of the retro and thank everyone for their time and contributions. 
  • We then agree on the time and date of the next retro and make our farewells. 

 

Retrospectives can have an amazing impact on your team when done regularly and respectfully. One hour every other week can do a lot for the cohesiveness of your team. Give your company a brighter, happier future by giving retros a try.

For more information on learning to lead your own retro one day, check out Agile Retrospectives: Making Good Teams Great. If you're ready to get started, reach out to have us lead your team retrospectives.
 




ElixirConf 2022 Productivity Takeaways https://binarynoggin.com/blog/elixirconf-2022-productivity-takeaways/ Fri, 09 Dec 2022 18:29:23 +0000 https://binarynoggin.com/?p=2994

The post ElixirConf 2022 Productivity Takeaways appeared first on Binary Noggin.

I am a huge fan of improving development workflows. The less I have to think about incidental things, the more I can concentrate on the problems that matter. One talk from ElixirConf 2022 helped improve my development workflow.

Jason Axelson doled out some essential tips to speed up Elixir development. A few quick wins for me included setting the ELIXIR_EDITOR and ERL_AFLAGS environment variables. Setting ELIXIR_EDITOR lets you quickly open module and function definitions, for a given library or for Elixir itself, in your favorite editor straight from iex.

Visual Studio Code

export ELIXIR_EDITOR="code --goto __FILE__:__LINE__"

Emacs

export ELIXIR_EDITOR="emacs +__LINE__ __FILE__"
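If Vim is your editor, an analogous setting (an untested sketch following the same +line file convention as the Emacs example above) would be:

```shell
# Open the file at the given line in Vim (assumes the same +LINE FILE convention)
export ELIXIR_EDITOR="vim +__LINE__ __FILE__"
```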

 

Once you have your favorite editor set up, you can start opening and exploring code. This is a great way to learn, and to find out more when you are stuck on a problem outside of your application code.

iex(1)> open Phoenix.LiveView

When the ERL_AFLAGS environment variable is set, you can search previously executed commands in iex across sessions; without it, you can only search the current iex session. The search uses the same keyboard shortcut as your shell’s reverse history search (Ctrl+r in bash). Instead of re-typing commands over and over, we can search, make a selection, and re-run a command.

Shell

export ERL_AFLAGS="-kernel shell_history enabled"

 

Open iex and follow along.

iex(1)> IO.puts("Hello Binary Noggin")

Hello Binary Noggin

:ok

iex(2)> IO.inspect("Binary Noggin says hello")

"Binary Noggin says hello"

"Binary Noggin says hello"

iex(3)> # Press Ctrl+r and type puts. The search prompt shown below will replace iex(3)>. Hit enter to select the IO.puts command we executed earlier; now we can make changes or re-execute it by hitting enter again.

(search)`puts': IO.puts("Hello Binary Noggin")

 

I also learned about ExSync, a library for recompiling and reloading Elixir code. I have added it to my workflow so that non-Phoenix applications get rebuilt on code changes. It is really useful when you are developing a library and want to play around with it in iex, having it reload between changes. Try out ExSync by adding the line below to your mix.exs dependencies.

{:exsync, "~> 0.2", only: :dev}
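For context (an assumption about your project layout, since the talk doesn't show mix.exs), the tuple goes inside the list returned by the private deps/0 function in mix.exs:

```elixir
# In mix.exs: ExSync is only compiled for the :dev environment
defp deps do
  [
    {:exsync, "~> 0.2", only: :dev}
  ]
end
```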

 

These tips have sped up my development processes, but Jason has many more. You can find those on Jason's slides.


Guarantees with Ecto.Repo.update_all/3 https://binarynoggin.com/blog/guarantees-with-ecto-repo-update_all-3/ Fri, 09 Dec 2022 18:19:27 +0000 https://binarynoggin.com/?p=2984

The post Guarantees with Ecto.Repo.update_all/3 appeared first on Binary Noggin.

We all want guarantees. We can get guarantees on our automobiles, homes, phones, shoes, food, and even our pets. We’ll take a guarantee in, on, or about anything. As software developers, our clients want guarantees for a whole host of things, from delivery dates to service uptime. Some of those are difficult to promise.

What if we need to provide a guarantee around selling limited-stock items? Let’s imagine ourselves as software developers engaged in delivering an application for managing inventory. Multiple users will access this application at any given moment to update stock levels and sales of our client’s famous and highly-desirable widgets. With so much activity updating the state of our data, we need a straightforward way to avoid race conditions. Follow along to see how we use Ecto.Repo.update_all/3 to provide these guarantees.

Our first pass at handling this behavior may be a little naive, but we’re confident we can solve this problem.

@doc """
Takes `amount` from the widget inventory count.
"""
def decrease_widget_inventory_count_naive(%Widget{} = widget, amount)
    when is_integer(amount) and amount > 0 do
  if widget.inventory_count >= amount do
    widget
    |> Ecto.Changeset.change(%{inventory_count: widget.inventory_count - amount})
    |> UpdateAll.Repo.update()
  else
    {:error, :failed_out_of_stock}
  end
end

Above, we get the current stock level from the database. Once we have it, we check that the amount in stock equals or exceeds the requested amount; otherwise, we return an error to the user. If we have the amount in stock, we create a changeset with the reduced stock amount and pass it to the Repo module to update the database. Job done! We can pat ourselves on the back and head to the kava bar, right?

What if more than one user at a time is purchasing Fancy Widget Co.’s widgets through our application? Hmm. That might actually be a problem. We capture the inventory_count with no guarantee that the value hasn’t changed between the time we capture it and the time we update the database. What if someone else has changed this widget’s inventory_count since then? We would be writing state to our database that’s incorrect. That’s problematic at best.

Let’s write a test real quick to see if this is as bad as we think it is.

test "two users 'buy' a widget at the same time when only one widget is available" do
    # each user fetching the value from the database at the same time
    {:ok, widget} = Inventory.create_widget(
        %{name: "BinaryNoggin Conference", inventory_count: 1}
    )

    # each user getting their widget with the same set of data
    tasks = [
        Task.async(Inventory, :decrease_widget_inventory_count_naive, [widget, 1]),
        Task.async(Inventory, :decrease_widget_inventory_count_naive, [widget, 1])
    ]

    # capture results from both users
    good_sales = tasks
    |> Task.await_many()
    |> Enum.filter(&(match?({:ok, _}, &1)))
    |> Enum.count()

    updated_widget = Repo.reload!(widget)

    # oh no! both users have purchased a widget when we only had one to sell!
    assert updated_widget.inventory_count == widget.inventory_count - good_sales
    assert updated_widget.inventory_count >= 0
end
1) test widgets two users 'buy' a widget at the same time when only one widget is available
test/update_all/inventory_test.exs:61
Assertion with == failed
code:  assert updated_widget.inventory_count == widget.inventory_count - good_sales

left:  0
right: -1
stacktrace:
test/update_all/inventory_test.exs:80: (test)

Finished in 0.09 seconds (0.00s async, 0.09s sync)
10 tests, 1 failure, 9 excluded

With only one item in stock according to our database, we’ve allowed two users to each purchase a widget. We’ll have some explaining to do if we ship this. Alright, let’s take another shot at this and see if Ecto.Repo.update_all/3 can help us provide the guarantee we’re looking for.

@doc """
Decreases the widget inventory count by `amount`.

## Examples

    iex> decrease_widget_inventory_count(widget, 10)
    {:ok, %Widget{}}
    {:error, :failed_out_of_stock}

"""
def decrease_widget_inventory_count(widget, amount)
    when is_integer(amount) and amount > 0 do
  negative_amount = -amount

  query =
    from(
      w in Widget,
      where: w.id == ^widget.id and w.inventory_count >= ^amount,
      select: w,
      update: [inc: [inventory_count: ^negative_amount]]
    )

  case UpdateAll.Repo.update_all(query, []) do
    {1, [widget]} -> {:ok, widget}
    _ -> {:error, :failed_out_of_stock}
  end
end

Let’s walk through the above function a bit and see if we’ve made a better choice than before. We’ve written a query that matches our widget only if there is sufficient inventory_count for the purchase. As part of the query, we’ve also included an update operation. Passing the inc (increment) operator to the update operation lets us change the value directly in the database, but only if the conditions of the query are met. We stored a negated copy of our amount beforehand since there is no dec (decrement) operator. On success, we get back a tuple with the number of rows updated and, because of the select: w clause, a list containing the updated widget. Any other result indicates the stock check failed, so we pass an error back to the user.

This bears all the hallmarks of an acceptable solution. Let’s put it to the test.

test "two users 'buy' a widget at the same time when only one is available" do
    # each user fetching the value from the database at the same time
    {:ok, widget} =
        Inventory.create_widget(
            %{name: "BinaryNoggin Conference", inventory_count: 1}
        )

    # each user getting their widget with the same set of data
    tasks = [
        Task.async(Inventory, :decrease_widget_inventory_count, [widget, 1]),
        Task.async(Inventory, :decrease_widget_inventory_count, [widget, 1])
    ]

    # capture results from both users
    good_sales = tasks
    |> Task.await_many()
    |> Enum.filter(&(match?({:ok, _}, &1)))
    |> Enum.count()

    updated_widget = Repo.reload!(widget)

    assert updated_widget.inventory_count == widget.inventory_count - good_sales
    assert updated_widget.inventory_count >= 0
    # well look at that we've prevented a false sale!
    # we've avoided a really uncomfortable problem with atomicity. 
    # we've now proven our guarantee!
end
Excluding tags: [:test]
Including tags: [line: "84"]
.
Finished in 0.08 seconds (0.00s async, 0.08s sync)
10 tests, 0 failures, 9 excluded

We’ve just executed an atomic action. With our database acting as our ‘traffic cop’, we cannot exceed the number of available widgets when selling to client customers. We have avoided a messy business situation involving apologies, hurt feelings, and wasted time. All other things being equal, we can now offer this guarantee to our clients.

There are caveats when using any tool, and update_all/3 is no exception. When using update_all/3, only the fields included in your query are updated; Ecto-generated timestamps are not touched unless you specifically include and update them.
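If you do want the timestamps to move, one option (a sketch that isn't part of our application; it assumes Ecto.Query is imported and the schema uses the default NaiveDateTime timestamps) is to set updated_at explicitly in the same call:

```elixir
# update_all/3 bypasses Ecto's automatic timestamps, so touch updated_at by hand
now = NaiveDateTime.utc_now() |> NaiveDateTime.truncate(:second)

from(w in Widget, where: w.id == ^widget.id and w.inventory_count >= ^amount)
|> UpdateAll.Repo.update_all(set: [updated_at: now], inc: [inventory_count: -amount])
```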

Using Ecto.Repo.update_all/3 allows us to make promises to our clients and their customers that we can feel good about. There are, of course, a number of other uses for update_all/3. Reference the docs here for more information about it.

Update - 2022/12/21

We've received some concerns challenging the assertion that what we describe above is an atomic action. When we referenced the documentation for transaction isolation in Postgres 15 we found the following:

Read Committed is the default isolation level in PostgreSQL. When a transaction uses this isolation level, a SELECT query (without a FOR UPDATE/SHARE clause) sees only data committed before the query began; it never sees either uncommitted data or changes committed during query execution by concurrent transactions. In effect, a SELECT query sees a snapshot of the database as of the instant the query begins to run. However, SELECT does see the effects of previous updates executed within its own transaction, even though they are not yet committed. Also note that two successive SELECT commands can see different data, even though they are within a single transaction if other transactions commit changes after the first SELECT starts and before the second SELECT starts.

Our original concern was avoiding a state change from another transaction or event while we are in the process of executing _our_ transaction. So our assertion that using `update_all/3` is _atomic_ does not hold up.

This did lead us to try to test the control we get from using our `update_all/3` solution and to ask, "What do we want from this code?" Our answer was simple, "We don't want to sell widgets when there are none in stock." Let's explore the test we created to poke at our code:

@tag timeout: 130_000
test "more purchase attempts than re-stocks attempts are made" do
    # each user fetching the value from the database at the same time
    {:ok, widget} = Inventory.create_widget(
        %{name: "BinaryNoggin Conference", inventory_count: 1}
    )
    # return actions for evaluation
    tasks = [
        Task.async(__MODULE__, :increase_widget_count, [widget, 500]),
        ## Note to the reader: reduced for the sake of brevity
        Task.async(__MODULE__, :decrease_widget_count, [widget, 700])
    ]

    # capture results from many users
    actions = tasks
    |> Task.await_many(130_000)

    IO.inspect(widget.inventory_count, label: "Original stock")
    good_increments = actions
    |> Enum.filter(&match?({:increased, _count}, &1))
    |> Enum.reduce(0, fn {:increased, count}, acc -> acc + count end)
    |> IO.inspect(label: "Re-stocks")

    good_sales = actions
    |> Enum.filter(&match?({:decreased, _count}, &1))
    |> Enum.reduce(0, fn {:decreased, count}, acc -> acc + count end)
    |> IO.inspect(label: "Sales")

    updated_widget = Repo.reload!(widget)
    IO.inspect(updated_widget.inventory_count, label: "Final count")

    assert updated_widget.inventory_count == 
        widget.inventory_count + good_increments - good_sales
    assert updated_widget.inventory_count == 0
end

This test should apply a significant amount of pressure from multiple 'users' buying widgets and restocking the widgets at the same time. Let's run it and see the results:

Excluding tags: [:test]
Including tags: [line: "84"]
Original stock: 1
Re-stocks: 6145
Sales: 6146
Final count: 0
.
Finished in 24.1 seconds (0.00s async, 24.1s sync)

We've run the above test over a hundred times, and we haven't been able to create a fail condition around selling stock that doesn't exist. The only failures we have seen came from insufficient database connections, or from waiting too long for a connection to become free for the `Repo` to use.

Our current takeaway from all of this is that you can probably solve the example problem with code like this. Some monitoring and alerting around these actions, using something like `telemetry`, would be advisable.

If you require absolute guarantees, you should be looking into transaction isolation in the Postgres docs. If you can bear some 'slippage', monitoring plus the `inc` behavior exposed in `update_all/3` (there is no `dec`; pass a negative value) can solve your problem without the performance cost of row/table locking or transaction isolation.
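For comparison, here is a hypothetical sketch of the pessimistic route using Ecto.Query.lock/2 (the function name is an assumption, and this is not code from our application; it assumes Ecto.Query is imported):

```elixir
# SELECT ... FOR UPDATE blocks concurrent writers to this row until our
# transaction commits, trading throughput for a hard guarantee.
def decrease_with_row_lock(widget_id, amount) do
  UpdateAll.Repo.transaction(fn ->
    widget =
      Widget
      |> where(id: ^widget_id)
      |> lock("FOR UPDATE")
      |> UpdateAll.Repo.one!()

    if widget.inventory_count >= amount do
      widget
      |> Ecto.Changeset.change(%{inventory_count: widget.inventory_count - amount})
      |> UpdateAll.Repo.update!()
    else
      UpdateAll.Repo.rollback(:failed_out_of_stock)
    end
  end)
end
```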

Thank you so much to those who engaged with us on this topic. We relish the opportunity to discuss these concepts.


The Lifecycle of a Phoenix LiveView https://binarynoggin.com/blog/the-lifecycle-of-a-phoenix-liveview/ Wed, 30 Nov 2022 21:36:13 +0000 https://binarynoggin.com/?p=2966

The post The Lifecycle of a Phoenix LiveView appeared first on Binary Noggin.

The LiveView request lifecycle runs twice when a connection is first made to your application. It runs once to render static content for web crawlers, search engines, and other non-javascript-enabled clients. The second pass occurs when the browser establishes the websocket connection that sends events and data back and forth between the application and the browser.

Let's walk through the processes and arguments of the request and response lifecycle to see what we can learn about LiveView. Kernel.dbg/2 was recently added to Elixir in v1.14. The output from dbg is verbose but helpful for understanding what is happening in our codebase. We will limit the output here to the most relevant information, but you can follow along by generating a project.

Create a new Phoenix LiveView project with mix phx.new live_view_lifecycle. Then generate a LiveView with mix phx.gen.live Accounts User users name:string age:integer. Now that we have the basic plumbing, let's start our server and explore this canyon twice.

The three callbacks that LiveView passes through when a request is made:

  • Phoenix.LiveView.mount/3
  • Phoenix.LiveView.handle_params/3
  • Phoenix.LiveView.render/1
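Stubbed out, a LiveView touching all three might look like the sketch below (the module name comes from the generator; the bodies are minimal placeholders, and handle_params/3 and render/1 have defaults you can often rely on instead):

```elixir
defmodule LiveViewLifecycleWeb.UserLive.Index do
  use LiveViewLifecycleWeb, :live_view

  # Called first, on both the static pass and the websocket pass
  def mount(_params, _session, socket), do: {:ok, socket}

  # Called after mount/3, and again whenever the URL is patched
  def handle_params(_params, _url, socket), do: {:noreply, socket}

  # Called last, turning the assigns into HEEx markup
  def render(assigns) do
    ~H"<h1>Listing Users</h1>"
  end
end
```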

Take your time to play around with the generated code and add a few users to the application.

Now let's add query parameters to the URL we will trace. Point our browser to http://localhost:4000/users?binary=noggin and start to look into how our application loads.

The first stop on our adventure train is mount/3. This is the first function that is called when the user makes a request. There is a default implementation in LiveView, so you don't have to implement your own. We’ve implemented our own to control how and when data is sent over the socket to the front-end.

def mount(_params, _session, socket) do
  dbg()

  if connected?(socket) do
    {:ok, assign(socket, :users, list_users()) |> dbg()}
  else
    {:ok, assign(socket, :users, []) |> dbg()}
  end
end

The first thing of note is connected?(socket). This function checks if the socket is connected. On the first pass, the socket is not connected and is used for static rendering. The first call to dbg/0 prints out our function's binding, including incoming arguments.

[lib/live_view_lifecycle_web/live/user_live/index.ex:8: LiveViewLifecycleWeb.UserLive.Index.mount/3]
binding() #=> [
  ...
  _params: %{"binary" => "noggin"},
  _session: %{},
  socket: #Phoenix.LiveView.Socket<
    ...
    assigns: %{
      __changed__: %{}
    },
    transport_pid: nil,
    ...
  >
]

Our incoming arguments include our query parameters as a map of string keys to string values. Keeping the keys as strings instead of atoms keeps malicious users from filling up all of our memory with bogus parameters. Our session is empty, but in a real application, it may be filled with interesting information about the user. Our assigns have no changes, and no data, yet.

Since we are not connected, we move to the else clause and assign users to an empty list. Getting a list of users is time-consuming, and we only want it to happen once, when someone is connecting; we will get that information on the next pass. An interesting thing to remember is that LiveView begins tracking what has changed in the assigns, for example __changed__: %{users: true}. LiveView uses these changes to determine what new information needs to be sent across the websocket. Sending only the data that changed reduces the load on the connection.

[lib/live_view_lifecycle_web/live/user_live/index.ex:13: LiveViewLifecycleWeb.UserLive.Index.mount/3]
assign(socket, :users, []) #=> #Phoenix.LiveView.Socket<
    ...
    assigns: %{
      __changed__: %{users: true},
      users: []
    },
    transport_pid: nil,
    ...
  >

On the second pass, the websocket is now connected, and we get a real list of users. When we enter mount/3, we notice one change: the socket now has a transport_pid. This is how connected?/1 determines whether the socket is connected.

[lib/live_view_lifecycle_web/live/user_live/index.ex:8: LiveViewLifecycleWeb.UserLive.Index.mount/3]
binding() #=> [
  ...
  _params: %{"binary" => "noggin"},
  _session: %{},
  socket: #Phoenix.LiveView.Socket<
    ...
    assigns: %{
      __changed__: %{},
      users: [],
      open_modal: true
    },
    transport_pid: #PID<0.111.0>,
    ...
  >
]

Since the socket is now connected, the users are fetched from the database and added to the assigns. Once again, LiveView marks the users as changed, so it knows the user list needs to be sent across the socket.

[lib/live_view_lifecycle_web/live/user_live/index.ex:11: LiveViewLifecycleWeb.UserLive.Index.mount/3]
assign(socket, :users, [%User{name: "Johnny", age: 39}]) #=> #Phoenix.LiveView.Socket<
    ...
    assigns: %{
      __changed__: %{users: true},
      users: [%User{name: "Johnny", age: 39}],
      open_modal: true
    },
    transport_pid: #PID<0.111.0>,
    ...
  >

The second stop on our adventure is handle_params/3. The handle_params/3 callback is a great place to process different URLs or parameters that may change how the page needs to display, such as opening or closing a modal. handle_params/3 gets called any time that our parameters change, and we are already connected to this LiveView. This often happens when we call push_patch/2 to keep our current LiveView but change the URL.

def handle_params(_params, _url, socket) do
  dbg()
  {:noreply, assign(socket, :open_modal, true) |> dbg()}
end

The incoming socket already has the changes from the call to mount/3. We are reducing over the socket, and each function receives the results of the previous callback. We also get the URL that was requested by the end user. This is the only place where we receive the URL, and it can be a good place to add on functionality. Instead of a modal parameter, we might have the user directed to /users/new and decide to open the modal based on the URL.
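As a sketch of that idea (hypothetical; not part of the generated code), we might open the modal whenever the requested path ends in /new:

```elixir
def handle_params(_params, url, socket) do
  # Decide on the modal from the requested path instead of a query parameter
  open_modal? = String.ends_with?(URI.parse(url).path, "/new")
  {:noreply, assign(socket, :open_modal, open_modal?)}
end
```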

[lib/live_view_lifecycle_web/live/user_live/index.ex:69: LiveViewLifecycleWeb.UserLive.Index.handle_params/3]
binding() #=> [
  _url: "http://localhost:4000/users?binary=noggin",
  _params: %{"binary" => "noggin"},
  socket: #Phoenix.LiveView.Socket<
    ...
    assigns: %{
      __changed__: %{users: true},
      users: []
    },
    ...
  >
]

Since we have updated the open_modal value, we once again add an entry to __changed__. The open_modal value of true signals the front end to display the open modal when rendering.

[lib/live_view_lifecycle_web/live/user_live/index.ex:70: LiveViewLifecycleWeb.UserLive.Index.handle_params/3]
assign(socket, :open_modal, true) #=> #Phoenix.LiveView.Socket<
    ...
    assigns: %{
      __changed__: %{users: true, open_modal: true},
      users: [],
      open_modal: true
    },
    transport_pid: nil,
    ...
  >

On our second time through, during the connected state, things look exactly as before except we have the new updated socket from the connected mount call.

[lib/live_view_lifecycle_web/live/user_live/index.ex:70: LiveViewLifecycleWeb.UserLive.Index.handle_params/3]
assign(socket, :open_modal, true) #=> #Phoenix.LiveView.Socket<
    ...
    assigns: %{
      __changed__: %{users: true, open_modal: true},
      users: [%User{name: "Johnny Otsuka", age: 39}],
      open_modal: true
    },
    transport_pid: #PID<0.111.0>,
    ...
  >

The last callback in our loop is render/1. We implement the LiveView template using render/1, but you can also create it via a file named index.html.heex with the same markup, and it will behave the same way.

def render(assigns) do
    dbg()

    ~H"""
    <h1>Listing Users</h1>
    ...
    <table>
      <thead>
        <tr>
          <th>Name</th>
          <th>Age</th>
          ...
        </tr>
      </thead>
      <tbody id="users">
        <%= for user <- @users do %>
          <tr id={"user-#{user.id}"}>
            <td><%= user.name %></td>
            <td><%= user.age %></td>
            ...
          </tr>
        <% end %>
      </tbody>
    </table>
    """
    |> dbg()
end

The last stop before the data is shipped to the browser is render/1. The assigns are taken out of the socket and passed into this function, and the values are made available to the template. We use @ inside the template as a shortcut to access the values inside the assigns map.

[lib/live_view_lifecycle_web/live/user_live/index.ex:23: LiveViewLifecycleWeb.UserLive.Index.render/1]
binding() #=> [
  assigns: %{
    __changed__: %{page_title: true, user: true, users: true},
    flash: %{},
    live_action: :index,
    page_title: "Listing Users",
    socket: #Phoenix.LiveView.Socket<
      id: "phx-Fx0gkN3mtBW1lABk",
      endpoint: LiveViewLifecycleWeb.Endpoint,
      view: LiveViewLifecycleWeb.UserLive.Index,
      parent_pid: nil,
      root_pid: #PID<0.111.0>,
      router: LiveViewLifecycleWeb.Router,
      assigns: #Phoenix.LiveView.Socket.AssignsNotInSocket<>,
      transport_pid: #PID<0.111.0>,
      ...
    >,
    user: nil,
    users: [
      %User{
        updated_at: ~N[2022-10-11 20:09:00],
        inserted_at: ~N[2022-10-11 20:09:00],
        name: "Johnny Otsuka",
        age: 39,
        id: 1,
        __meta__: #Ecto.Schema.Metadata<:loaded, "users">
      }
    ]
  }
]

The output of the render function is a Phoenix.LiveView.Rendered struct. The fingerprint is the most interesting part. The rendering engine uses the fingerprint to identify the rendered html and decide if it has already been rendered and if it is possible to skip the transmission of the rendered data. Since these can also be nested, LiveView doesn’t have to go to the bottom of the tree if it hits a known fingerprint.

[(live_view_lifecycle 0.1.0) lib/live_view_lifecycle_web/live/user_live/index.ex:63: LiveViewLifecycleWeb.UserLive.Index.render/1]
~H"""
<h1>Listing Users</h1>

...
""" #=> %Phoenix.LiveView.Rendered{
  static: ["<h1>Listing Users</h1>\n\n",
   "\n\n<table>\n  <thead>\n	<tr>\n  	<th>Name</th>\n  	<th>Age</th>\n\n  	<th></th>\n	</tr>\n  </thead>\n  <tbody id=\"users\">\n	",
   "\n  </tbody>\n</table>\n\n<span>", "</span>"],
  dynamic: #Function<0.132121069/1 in LiveViewLifecycleWeb.UserLive.Index.render/1>,
  fingerprint: 299956156419391184196309507721197495057,
  root: false
}

The lifecycle of a Phoenix LiveView starts as a static HTML request. When the request reaches our server, mount/3 and handle_params/3 get their chances to update the socket, and often the assigns. Next, the websocket is connected between the browser and a LiveView process. Once that connection is made, the LiveView process executes the same callbacks again to produce the response sent over the websocket. From this point on, the LiveView process maintains the state between the client and server, and updates on the back end render on the front end in near real time. Take the time to explore further and add some functionality to your application. Keeping the dbg calls while you play can help solidify your understanding of the LiveView process.

References

https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#module-life-cycle
https://pragmaticstudio.com/tutorials/the-life-cycle-of-a-phoenix-liveview

The post The Lifecycle of a Phoenix LiveView appeared first on Binary Noggin.

Building Embedded Systems in the Modern Era https://binarynoggin.com/blog/building-embedded-systems-in-the-modern-era/ https://binarynoggin.com/blog/building-embedded-systems-in-the-modern-era/#respond Wed, 12 Oct 2022 00:00:13 +0000 https://binarynoggin.com/?p=2880 The post Building Embedded Systems in the Modern Era appeared first on Binary Noggin.


I remember the early days of hacking small devices with a single purpose. Many of them lived unconnected and provided one bit of functionality. Maybe you remember that day too. Perhaps you were a professional working on a sprinkler controller or a hobbyist that wanted to have an animated pumpkin for Halloween. Either way, you likely used C or Python as your device language. Those days were great. I spent them exploring and learning. As I grew, the world changed.

Now, our companies rely on data. Data is more valuable than gold. Even our hobby projects live connected. Our consumers assume devices are always connected, and products keep up with constant data streams. We need devices to be constantly connected. We need them to process data quickly and move it to remote storage. We need systems to be fault-tolerant to reduce the loss of any of this precious data. Can we use better tools to accomplish these new goals? Sometimes, you have to have the Nerves to try something new.

One of the scariest moments in any embedded team's history is updating firmware on deployed hardware. The numerous variables during an upgrade only increase when devices are no longer in the hands of our team. Don't worry, digital-slinger-of-bits-and-solder! Nerves has us covered. Nerves systems ship with the ability to send over-the-air updates and an A/B partition scheme. When we first send our devices to the field, they ship with firmware safely running in the A partition.

Let's examine a scenario: One day, a feature request is made. We diligently do our updates and test the deployment of the updated system on multiple devices. Nerves prepares our devices for the unexpected scenarios that our testing doesn't cover. When we deploy the update to the devices in the field, the new version of the firmware gets loaded onto the B partition. The system restarts using the B partition to load the firmware. B becomes the primary partition when the firmware starts and passes health checks. Unfortunately, one of the devices is in a state that never occurred in our lab. The system restarts using the B partition, but an error occurs during our health checks. The device then reboots into the previous A partition, running the previous version of our firmware. It will happily run the old version until we can make a fix and push an updated version. We didn't have to write extra code or worry about our user's device turning into a new paperweight. Nerves came to the rescue with a well-thought-out deployment strategy.

Systems deployed outside the lab often run into unexpected scenarios, triggering a reboot. The reboot times on resource-constrained devices can take an eternity. This delay is not ideal in mission-critical applications or when users are waiting. Nerves, built on Elixir, provides tools that simplify the process of monitoring and restarting subsystems. Our system can continue to run while subsystems restart within milliseconds, saving valuable time and ensuring that our products remain usable. With these tools and focusing on recovering from errors, our devices are more fault-tolerant without extended work efforts.
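That restart behavior comes straight from OTP supervisors, which Nerves applications use like any other Elixir application. Here is a minimal sketch of the idea in plain Elixir, no Nerves required; the module name is invented for illustration:

```elixir
defmodule SensorReader do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: {:ok, %{readings: []}}
end

# The supervisor watches the subsystem and restarts it if it crashes.
{:ok, _sup} = Supervisor.start_link([SensorReader], strategy: :one_for_one)

# Simulate an unexpected failure in the subsystem.
Process.exit(Process.whereis(SensorReader), :kill)

# Milliseconds later the subsystem is back, and the rest of the
# system never stopped running.
Process.sleep(100)
Process.alive?(Process.whereis(SensorReader)) # => true
```

The whole device does not reboot; only the crashed process is replaced, which is what keeps recovery in the millisecond range.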

Devices are more connected than ever. Ericsson built OTP in 1995 for distributed application management, resource monitoring, servers, and event handling. With more than two battle-tested decades of ensuring networks continue to function, OTP has the developer tools to keep communication lines open and operating under real-world conditions without putting strain on the development team. Building on those tools, Nerves has a host of libraries and tools that interface with many of today's popular networking options.

When using traditional embedded languages, your company often maintains two development teams (embedded and server teams) instead of one. We want to build a seamless experience for our customers between the two systems. Creating a seamless system becomes difficult with the teams' differing concerns and tools. Conway's law states, "Any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure." One development team builds a system that behaves as one cohesive entity. Elixir allows both devices and servers to use the same technology, vocabulary, and thought processes, reducing the friction of communication and context-switching among the team members.

Choosing Nerves over other tools provides fault tolerance, high availability, deployment confidence, and network communication tools, allowing our teams to focus on business solutions instead of stability. We can reduce our communication overhead and budgets by having a single, cohesive team working on the server and the devices.

 

Amos King is the founder and CEO of Binary Noggin. A leading expert in emerging software languages, Amos is also a frequent speaker at conferences like ElixirConf and Lonestar Elixir and co-host of the popular Elixir Outlaws and This Agile Life podcasts.

Founded in 2007, Binary Noggin is a team of software engineers and architects who serve as a trusted extension of your team, helping your company succeed through collaboration. We forge customizable solutions using Agile methodologies and our mastery of Elixir, Ruby and other open source technologies. Share your ideas with us on Facebook and Twitter.

Dynamic Form Inputs in Elixir LiveView https://binarynoggin.com/blog/dynamic-form-inputs-in-elixir-liveview/ https://binarynoggin.com/blog/dynamic-form-inputs-in-elixir-liveview/#respond Thu, 25 Aug 2022 00:00:04 +0000 https://binarynoggin.com/?p=2692 I recently found myself addressing a product requirement involving a form with input fields generated from a list of data we retrieved from a third-party provider. This requirement proved challenging as the prescribed user experience stretched a little beyond the familiar tools for form building in Elixir. Even more frustrating, the wireframes presented the data […]

The post Dynamic Form Inputs in Elixir LiveView appeared first on Binary Noggin.

I recently found myself addressing a product requirement involving a form with input fields generated from a list of data we retrieved from a third-party provider. This requirement proved challenging as the prescribed user experience stretched a little beyond the familiar tools for form building in Elixir.

Even more frustrating, the wireframes presented the data as a table with checkboxes for each row and not a multi-select element.

To recreate a minimal working example, I started my own survey company.

The survey company is responsible for gathering lists of people's favorite animals. We get this list from an external API and then need to allow a user to make a selection on our surveys page.

For simplicity's sake, we will only work with a hardcoded list of strings to represent our external data.

Modeling the Survey

Our survey data is modeled as such:

  1. Name
  2. Favorite animals

I've added two fields to represent each of our examples from the form: one for a multiple-select element and one for a checkbox-group element.

schema "surveys" do
  field :name, :string
  field :favorite_animal_select_multiple, {:array, :string}
  field :favorite_animal_checkbox_group, {:array, :string}
  timestamps()
end

Structuring the Form Around the Survey Schema

Given that the list of animal choices is outside of our direct control:

  1. How do we represent the form data as a list of strings while presenting the field as a group of checkboxes?
  2. How can we use our schema changeset for form validations?

The Phoenix documentation has a really nice example of how to implement similar behavior with a multi-select element (multiple_select/4): https://hexdocs.pm/phoenix_html/Phoenix.HTML.Form.html#multiple_select/4

multiple_select(f, :favorite_animal_select_multiple, ["Phoenix", "Wallaby", "Numbat"])

Returns the following HTML:

<select id="favorite-animals-survey-form_favorite_animal_select_multiple" multiple="" name="survey[favorite_animal_select_multiple][]"><option value="Phoenix">Phoenix</option><option value="Wallaby">Wallaby</option><option value="Numbat">Numbat</option></select>

Generating Unserialized Inputs

By default, each checkbox adds its own key to the map that represents the form. If we generate our form inputs in a loop:

["Phoenix", "Wallaby", "Numbat"]
|> Enum.map(&(~s(<input type="checkbox" name="#{&1}" value="#{&1}">)))
|> Enum.join()

Returns the following markup:

<input type="checkbox" name="Phoenix" value="Phoenix">
<input type="checkbox" name="Wallaby" value="Wallaby">
<input type="checkbox" name="Numbat" value="Numbat">

*** In case you’re not familiar with the ~s() syntax, we are using a sigil for the convenience of not having to escape the double quotes in the example.

https://elixir-lang.org/getting-started/sigils.html#strings-char-lists-and-word-lists-sigils
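A quick illustration of the equivalence:

```elixir
# The ~s sigil lets us write double quotes without escaping them.
~s(<input type="checkbox" name="Numbat">) == "<input type=\"checkbox\" name=\"Numbat\">"
# => true
```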

When submitted, the markup above creates a map representation of our form that grows by one key every time a new animal gets added to our survey.

%{
  "Phoenix" => "Phoenix",
  "Wallaby" => "Wallaby",
  "Numbat" => "Numbat",
  "Honey Badger" => "Honey Badger"
}

We need a different solution if we want to use our survey changeset for validations.

Let’s compare the checkbox data against what we get from Phoenix’s multiple_select/4 function.

%{
  "favorite_animal_select_multiple" =>
  ["Phoenix", "Wallaby", "Numbat"]
}

This data conforms to our schema nicely, so how can we implement this array structure for checkboxes?

There is, unfortunately, no shortcut here like there is with multiple_select, but if we inspect the markup generated by that function, we can borrow a few HTML patterns to group our checkboxes into a single field.

<select id="favorite-animals-survey-form_favorite_animal_select_multiple" multiple="" name="survey[favorite_animal_select_multiple][]"><option value="Phoenix">Phoenix</option><option value="Wallaby">Wallaby</option><option value="Numbat">Numbat</option></select>

The name attribute syntax is exactly what we need to model our checkboxes as a collection.

name="survey[favorite_animal_select_multiple][]"
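Plug, which Phoenix uses to parse request parameters, is what turns that bracketed name into a list. If you want to see it in isolation (assuming Plug is available in your project), Plug.Conn.Query.decode/1 shows the transformation:

```elixir
Plug.Conn.Query.decode(
  "survey[favorite_animal_checkbox_group][]=Phoenix&survey[favorite_animal_checkbox_group][]=Wallaby"
)
# => %{"survey" => %{"favorite_animal_checkbox_group" => ["Phoenix", "Wallaby"]}}
```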

We can also group the checkboxes in HTML using a fieldset element.

<fieldset id="favorite-animals-survey-form_favorite_animal_checkbox_group">
  <label>What is Your Favorite Animal?</label>
  <%= for animal <- @animal_choices do %>
    <input type="checkbox" name="survey[favorite_animal_checkbox_group][]" value={animal} /><%= animal %><br />
  <% end %>
</fieldset>

Now when we inspect our form data from the submit event, we can see it conforms to the same array-type structure as the multiple_select.

%{
  "favorite_animal_checkbox_group" =>
  ["Phoenix", "Wallaby", "Numbat"]
}

Using a Changeset to Display Errors

Now that our form fields pass the correct data structure to the backend, we still need to validate it and show errors to the user. We can do so by adding an error_tag.

<%= error_tag(f, :favorite_animal_checkbox_group) %>

We run the form data through our survey changeset for validation when the form is submitted.

# lib/select_multiple/surveys/survey.ex
def changeset(survey, attrs) do
  survey
  |> cast(attrs, [:name, :favorite_animal_select_multiple, :favorite_animal_checkbox_group])
  |> validate_required([
    :name,
    :favorite_animal_select_multiple,
    :favorite_animal_checkbox_group
  ])
end

Our entire form component looks as such in the template:

# lib/select_multiple_web/live/survey_live/form_component.html.heex
<.form
  let={f}
  for={@changeset}
  id="favorite-animals-survey-form"
  phx-target={@myself}
  phx-submit="save"
>
  <%= label(f, :name) %>
  <%= text_input(f, :name) %>
  <%= error_tag(f, :name) %>
  <label>What is Your Favorite Animal?</label>
  <%= multiple_select(f, :favorite_animal_select_multiple, @animal_choices) %>
  <fieldset id="favorite-animals-survey-form_favorite_animal_checkbox_group">
    <label>What is Your Favorite Animal?</label>
 
    <%= for animal <- @animal_choices do %>
      <input type="checkbox" name="survey[favorite_animal_checkbox_group][]" value={animal} /><%= animal %><br />
    <% end %>
 
    <%= error_tag(f, :favorite_animal_checkbox_group) %>
 
  </fieldset>
  <div>
    <%= submit("Save", phx_disable_with: "Saving...") %>
  </div>
</.form>

And our handle_event for the form submit:

# lib/select_multiple_web/live/survey_live/form_component.ex
 
def handle_event("save", %{"survey" => survey_params}, socket) do
  save_survey(socket, socket.assigns.action, survey_params)
end
 
defp save_survey(socket, :new, survey_params) do
  case Surveys.create_survey(survey_params) do
    {:ok, _survey} ->
      {:noreply,
        socket
        |> put_flash(:info, "Survey created successfully")
        |> push_redirect(to: socket.assigns.return_to)}
 
    {:error, %Ecto.Changeset{} = changeset} ->
      {:noreply, assign(socket, changeset: changeset)}
  end
end

Conclusion

There is a clean, testable way forward for working with checkbox form inputs when your data model is a list of items. multiple_select has great support out of the box, but you can get similar behavior for checkboxes with a little bit of extra markup in your template.

Resources: https://github.com/thejohncotton/dynamic-form-inputs-example

Binary Noggin Ranks #20 on Kansas City Business Journal Fast 50 List https://binarynoggin.com/blog/binary-noggin-ranks-20-on-kansas-city-business-journal-fast-50-list/ https://binarynoggin.com/blog/binary-noggin-ranks-20-on-kansas-city-business-journal-fast-50-list/#respond Thu, 28 Jul 2022 00:00:24 +0000 https://binarynoggin.com/?p=2648 The post Binary Noggin Ranks #20 on Kansas City Business Journal Fast 50 List appeared first on Binary Noggin.


Binary Noggin is honored to be ranked #20 on Kansas City Business Journal’s prominent Fast 50 list for 2022. The list features the area’s fastest-growing businesses throughout every industry, all with revenue of $1 million or more in 2021.

Learn more about the company’s ranking in the Kansas City Business Journal.

Read the Article

 

Front-End Testing in Phoenix With Wallaby https://binarynoggin.com/blog/front-end-testing-in-phoenix-with-wallaby/ https://binarynoggin.com/blog/front-end-testing-in-phoenix-with-wallaby/#respond Wed, 29 Jun 2022 00:00:26 +0000 https://binarynoggin.com/?p=2616 Why is front-end testing even a thing? We have LiveView, right? I can run tests on my components and everything is magical. What is Wallaby? You may have heard of Wallaby at the Big Elixir when Britton Broderick gave a talk about using it with LiveView, or maybe you knew of it before that. If […]

The post Front-End Testing in Phoenix With Wallaby appeared first on Binary Noggin.

Why is front-end testing even a thing? We have LiveView, right? I can run tests on my components and everything is magical.

What is Wallaby?

You may have heard of Wallaby at The Big Elixir when Britton Broderick gave a talk about using it with LiveView, or maybe you knew of it before that. If you aren’t familiar with Wallaby, it’s a testing library that, in its own words, helps you “test your web applications by simulating realistic user interactions.”

Why do we use Wallaby?

At Binary Noggin, we use Wallaby to cover numerous cases in our Phoenix applications, but we most regularly call on Wallaby for test coverage when we have Javascript and LiveView living on the same page. Wallaby lets us verify behavior in the browser in lieu of verifying the output of a function like most LiveView tests. Lest we draw the ire of the Twittersphere: we use and love LiveView tests; however, when we need some mixed interaction, our LiveView tests don’t quite get us there.

What problem did we encounter that wasn’t already solved?

The simplest way to get started with Wallaby browser testing is using ChromeDriver, a WebDriver server, to simulate user interactions in Chrome. For these tests to work, the version of the Chrome browser and the version of the ChromeDriver server need to match.

When the Chrome and ChromeDriver versions did not match, we experienced all sorts of ambiguous errors in our test results. Usually, after 10 to 20 minutes, someone would remember they had recently updated Chrome: “Ah! I forgot to update ChromeDriver! Give me just a minute while I download the most recent version.”

** (RuntimeError) invalid session id
code: |> Module.view(module.token.id)
stacktrace:
  (wallaby 0.29.1) lib/wallaby/httpclient.ex:136: Wallaby.HTTPClient.check_for_response_errors/1
  (wallaby 0.29.1) lib/wallaby/httpclient.ex:56: Wallaby.HTTPClient.make_request/5
  (wallaby 0.29.1) lib/wallaby/webdriver_client.ex:329: Wallaby.WebdriverClient.visit/2
  (wallaby 0.29.1) lib/wallaby/driver/log_checker.ex:6: Wallaby.Driver.LogChecker.check_logs!/2
  (wallaby 0.29.1) lib/wallaby/browser.ex:1235: Wallaby.Browser.visit/2

Worse yet, we usually experienced this problem one developer after another until everyone had updated their local version of ChromeDriver. You can probably see where this becomes a bit of a time sink.

How did we solve our problem?

After a handful of run-ins with this problem, we realized it would be more convenient for the Wallaby library to tell us our dependencies were not synced instead of letting us get into actually running tests with mismatching versions of Chrome and ChromeDriver.

Initial solution?

First, we found a solution that worked for us locally. We added a check to our Mix alias commands, before we executed Wallaby tests, to warn us if there was a version mismatch between Chrome and ChromeDriver. This satisfied our immediate need.

defmodule Mix.Tasks.Wallaby.Chromedriver do
  @moduledoc "Compares chrome version with chromedriver version and errors when major, minor and build numbers do not match."
  @shortdoc "Errors when chrome and chromedriver version do not align."

  use Mix.Task

  @impl Mix.Task
  def run(_) do
    chrome_driver =
      case System.find_executable("chromedriver") do
        nil ->
          IO.puts("chromedriver is not in PATH")
          exit({:shutdown, 1})

        path ->
          path
      end

    {chrome_driver_version_string, 0} = System.cmd(chrome_driver, ["--version"])

    chrome_driver_version =
      chrome_driver_version_string
      |> String.split(" ")
      |> Enum.at(1)
      |> String.split(".")
      |> Enum.slice(0..2)
      |> Enum.join(".")

    # TODO: figure out how to find google chrome programmatically
    {google_chrome_version_string, 0} =
      case :os.type() do
        {:unix, :darwin} ->
          System.cmd("/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome", [
            "--version"
          ])

        {:unix, :linux} ->
          # this assumes a ubuntu-ish install location for chrome
          System.cmd("/usr/bin/google-chrome", ["--version"])
      end

    google_chrome_version =
      google_chrome_version_string
      |> String.split(" ")
      |> Enum.at(2)
      |> String.split(".")
      |> Enum.slice(0..2)
      |> Enum.join(".")

    case Version.compare(google_chrome_version, chrome_driver_version) do
      :gt ->
        IO.puts("you need to download chromedriver #{google_chrome_version}")
        exit({:shutdown, 1})

      _ ->
        IO.puts("yay chrome and chromedriver match nothing to do here.")
    end
  end
end
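For reference, wiring a task like this into a test flow can be done with a Mix alias in mix.exs; the alias and tag names below are illustrative, not the exact ones we used:

```elixir
# mix.exs
defp aliases do
  [
    # Run the version check before any browser-driven tests.
    "test.browser": ["wallaby.chromedriver", "test --only browser"]
  ]
end
```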

Not long after, we realized we likely weren’t the only development team experiencing this time sink.

Why did we share our solution?

We could probably save someone else a little bit of time, too, right? At the very least, a human being wouldn’t have to remember what a given obscure error meant in multiple tests. They would know up front that they need to update their tools in order to get their versions of Chrome and ChromeDriver to match.

We reached out to the current maintainer of Wallaby, Mitchell Hanberg (twitter, github), and asked how he felt about our Mix alias solution. While he was interested in the end result, our method left something to be desired. Mitchell asked us if we would consider writing a pull request to add the behavior into the Wallaby library itself.

Current solution

Since then, we’ve rewritten our implementation and included a check in the initial steps of Wallaby initialization. There was already a check for the minimum version of Chrome, and that seemed like a logical place for us to add another dependency version check. We shared our pull request, and after some adjustments, the changes were accepted.
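Elixir's built-in Version module does the heavy lifting for a check like this; conceptually, the comparison boils down to something like the following (version numbers made up for illustration):

```elixir
chrome = "104.0.5112"       # reported by the browser
chromedriver = "103.0.5060" # reported by chromedriver --version

Version.compare(chrome, chromedriver)
# => :gt, meaning Chrome is ahead of ChromeDriver, so warn before running tests
```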

Conclusion

We know maintaining open-source tools and libraries is a challenging endeavor. We’re grateful to the engineers who give their time and expertise to helping the community. We saw this issue as an opportunity to give back. Each time we see this warning displayed in Wallaby, we’ll have a moment of nostalgia and joy that we’ve saved ourselves, and hopefully others, a few minutes of head-scratching.

The post Front-End Testing in Phoenix With Wallaby appeared first on Binary Noggin.

]]>
https://binarynoggin.com/blog/front-end-testing-in-phoenix-with-wallaby/feed/ 0
The Binary Noggin Breakdown: Why Is Collaboration Important in Software Development? https://binarynoggin.com/blog/the-binary-noggin-breakdown-why-is-collaboration-important-in-software-development/ https://binarynoggin.com/blog/the-binary-noggin-breakdown-why-is-collaboration-important-in-software-development/#respond Tue, 14 Jun 2022 00:00:30 +0000 https://binarynoggin.com/?p=2600 The post The Binary Noggin Breakdown: Why Is Collaboration Important in Software Development? appeared first on Binary Noggin.


If you know anything about Binary Noggin, know that we are a very people-oriented company. It's how we do things and, more importantly, why we do things. We aim to create effective engineering processes for the software and the team. 

Although our team of engineers is scattered across the country, we work side by side through tools like Toucan and Google Meet to allow for collaboration in day-to-day operations. Why do we work this way? Because two heads are better than one! Hear from our team on why collaboration is essential in software development.

Seeing Every Angle to Reduce Surprises. 

Collaboration between all parties is essential in software development, not only with the technology teams but also with stakeholders and end-users. People think differently, and that is a good thing. Collaboration opens up the ability to create resilient software and reduce surprises. Use this to your advantage. — John Cotton, Software Engineer

Enhancing Software and the Teams That Create It.

There are many benefits to collaboration within software development. When multiple people are working on a project, we get the advantage of shared knowledge and a chance to keep each other on track. A pair of software developers have an opportunity to slow each other down and run tests on assumptions made. If the assumptions are untrue, they can avoid building code around those invalid assumptions. We get to check that process more thoroughly and more often through collaboration. The project may take a little longer, or it might not, but either way, the result is code that is more robust, more stable, and less buggy.

Collaboration benefits the team as well as the software. We have to learn how to communicate with each other. Otherwise, one person is doing all the work while the other follows along behind throughout the pairing session. An uneven pairing session does happen sometimes, but we still get the opportunity to learn to understand, communicate, and build empathy and trust with each other while we're working. These are byproducts of sitting people down and working on a project together. It is a helpful experience to borrow knowledge from each other and challenge preconceived notions about a given thing. — Matt Hall, Software Engineer 

Increasing Efficiencies Through Pair Programming.

This question immediately makes me think of pair programming, which is an awesome way to collaborate in software development. I like pair programming because there are two or more people sharing a keyboard, monitor, and microphone while working on a task. Having all these devices shared allows for real-time collaboration as if you were sitting beside the others pairing. Among other benefits, the pair can work through a task very quickly since there isn't a need to chat outside of the current pair. Pairing also helps with team building and knowledge transfer/sharing. — Johnny Otsuka, Software Engineer

Collaboration helps ensure that successful outcomes are firmly rooted and vetted by people with varying degrees of expertise and experience. Our team has decades of experience and a willingness to push the boundaries of development to make more sustainable software for a technology-driven world. Collaboration keeps the innovation and momentum going. Interested in what would happen if you collaborated with us? Reach out! Let's have a conversation!

 

Questions to Consider When Choosing a Software Development Consultant https://binarynoggin.com/blog/questions-to-consider-when-choosing-a-software-development-consultant/ https://binarynoggin.com/blog/questions-to-consider-when-choosing-a-software-development-consultant/#respond Wed, 04 May 2022 00:00:39 +0000 https://binarynoggin.com/?p=2532 The post Questions to Consider When Choosing a Software Development Consultant appeared first on Binary Noggin.


Hiring a consultant is a significant commitment, often spanning months or even years depending on the project at hand, but how do you start your search for such a crucial and long-term partnership? Ask yourself and those within your company the following questions before and during your search.

  • What specialties does your project require?

    It will be much easier to narrow down the field of potential consultancies once you identify specialties and technologies that can support your project. A specialized software development consultancy will be able to execute the project efficiently, troubleshoot potential challenges, and develop a scalable end product that will grow with your company.

  • What does a good fit mean for you and your company?

    Consider your unique team and project before diving into your software consultancy search. Identify honestly what qualities you value above and beyond technical capabilities. Do you appreciate frank honesty, a team rooted in research-based methods, or availability for collaboration? The team you choose will be a part of your company for a long period of time, and it is vital that the partnership be secure. Decide what criteria a consultancy must meet, and wait for the consultancy that meets your standards and needs.

  • What is the software development consultancy's reputation in the community?

    Vetting a potential software development partner can be difficult, but it is vital for the success of any project. Do your homework. Ask for references, understand the consultancy’s project background, and dig into the company’s activity within the developer community. Start with an internet search for Slack communities in your identified technologies. Consultants that participate in communities are likely to find and hire the best developers in those communities. The consultancy also has access to feedback on their ideas because there is a broad audience to vet them. Those ideas get brought into your codebase and help it remain healthy.

  • Do you feel confident in the software development consultancy?

    The pillar of every partnership, personal or professional, is trust. Do you trust the software developers within the consultancy you are considering? If a project kicks off without trust, it will be difficult for everyone to execute their work effectively and efficiently. Even if you can’t articulate the why, if you do not feel trust in any individuals within a consultancy, it is better to continue your search.

  • When discussing the potential project, are we on the same page?

    Make sure the messages you are conveying are being repeated back to you. When having conversations with software development consultant candidates, you want to know they understand your goals and mission. A seamless flow of communication should continue from the hiring process to the actual project execution, so ask questions that verify you are being heard. Do not hesitate to ask a potential consultancy if there is any aspect of your project they cannot do.

  • Does the software development consultancy ask probing questions to learn more about the project?

    Questions are valuable when looking for a partner, and they should come from both you and the potential consultancy. A successful meeting will become a conversation with insightful questions about real-world implications and user perspectives. You must be able to get beyond simply the technical capabilities of a software development team to discuss the heart of your business objectives for the project. A consultancy with your long-term success in mind will be a valuable partner.

  • Does the software development consultancy seem excited about the project?

    When considering a software development consultant for your company, look for one who is genuinely excited about your project or product. If a consultant not only asks probing questions but also starts dreaming with you about how your project or product could function in the future, you can be confident that the partnership will be compatible and mutually beneficial.

Move Forward With Confidence

Arming yourself and your team with these questions will help hone your search for a software development consultant that genuinely meets your needs.

No matter the stage of your search, consider Binary Noggin’s team of experienced and agile software engineers. Contact us to start a conversation today.
